00:00:00.000 Started by upstream project "autotest-per-patch" build number 132090 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.039 Fetching changes from the remote Git repository 00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.083 Using shallow fetch with depth 1 00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.083 > git --version # timeout=10 00:00:00.145 > git --version # 'git version 2.39.2' 00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.185 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.185 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.441 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.455 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.470 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.470 > git config core.sparsecheckout # timeout=10 00:00:03.481 > git read-tree -mu HEAD # timeout=10 00:00:03.499 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.519 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.519 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.633 [Pipeline] Start of Pipeline 00:00:03.647 [Pipeline] library 00:00:03.648 Loading library shm_lib@master 00:00:03.648 Library shm_lib@master is cached. Copying from home. 00:00:03.663 [Pipeline] node 00:00:18.665 Still waiting to schedule task 00:00:18.665 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:21.092 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest 00:07:21.094 [Pipeline] { 00:07:21.107 [Pipeline] catchError 00:07:21.109 [Pipeline] { 00:07:21.122 [Pipeline] wrap 00:07:21.131 [Pipeline] { 00:07:21.139 [Pipeline] stage 00:07:21.141 [Pipeline] { (Prologue) 00:07:21.161 [Pipeline] echo 00:07:21.163 Node: VM-host-WFP1 00:07:21.171 [Pipeline] cleanWs 00:07:21.181 [WS-CLEANUP] Deleting project workspace... 00:07:21.181 [WS-CLEANUP] Deferred wipeout is used... 00:07:21.187 [WS-CLEANUP] done 00:07:21.379 [Pipeline] setCustomBuildProperty 00:07:21.453 [Pipeline] httpRequest 00:07:21.861 [Pipeline] echo 00:07:21.863 Sorcerer 10.211.164.101 is alive 00:07:21.874 [Pipeline] retry 00:07:21.876 [Pipeline] { 00:07:21.890 [Pipeline] httpRequest 00:07:21.894 HttpMethod: GET 00:07:21.895 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:07:21.896 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:07:21.911 Response Code: HTTP/1.1 200 OK 00:07:21.912 Success: Status code 200 is in the accepted range: 200,404 00:07:21.913 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:07:26.722 [Pipeline] } 00:07:26.739 [Pipeline] // retry 00:07:26.748 [Pipeline] sh 00:07:27.028 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:07:27.043 [Pipeline] httpRequest 00:07:27.463 [Pipeline] echo 00:07:27.465 Sorcerer 10.211.164.101 is alive 00:07:27.475 [Pipeline] retry 00:07:27.477 
[Pipeline] { 00:07:27.492 [Pipeline] httpRequest 00:07:27.498 HttpMethod: GET 00:07:27.498 URL: http://10.211.164.101/packages/spdk_cc533a3e572d8a2256a4e2c932c1dc0c86786c4a.tar.gz 00:07:27.499 Sending request to url: http://10.211.164.101/packages/spdk_cc533a3e572d8a2256a4e2c932c1dc0c86786c4a.tar.gz 00:07:27.509 Response Code: HTTP/1.1 200 OK 00:07:27.510 Success: Status code 200 is in the accepted range: 200,404 00:07:27.511 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_cc533a3e572d8a2256a4e2c932c1dc0c86786c4a.tar.gz 00:07:44.073 [Pipeline] } 00:07:44.091 [Pipeline] // retry 00:07:44.099 [Pipeline] sh 00:07:44.382 + tar --no-same-owner -xf spdk_cc533a3e572d8a2256a4e2c932c1dc0c86786c4a.tar.gz 00:07:46.931 [Pipeline] sh 00:07:47.214 + git -C spdk log --oneline -n5 00:07:47.214 cc533a3e5 nvme/nvme: Factor out submit_request function 00:07:47.214 117895738 accel/mlx5: Factor out task submissions 00:07:47.214 af0187bf9 nvme/rdma: Remove qpair::max_recv_sge as unused 00:07:47.214 f0e4b91ff nvme/rdma: Add likely/unlikely to IO path 00:07:47.214 51bde6628 nvme/rdma: Factor our contig request preparation 00:07:47.234 [Pipeline] writeFile 00:07:47.250 [Pipeline] sh 00:07:47.563 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:47.575 [Pipeline] sh 00:07:47.854 + cat autorun-spdk.conf 00:07:47.854 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:47.854 SPDK_RUN_ASAN=1 00:07:47.854 SPDK_RUN_UBSAN=1 00:07:47.854 SPDK_TEST_RAID=1 00:07:47.854 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:47.861 RUN_NIGHTLY=0 00:07:47.863 [Pipeline] } 00:07:47.875 [Pipeline] // stage 00:07:47.890 [Pipeline] stage 00:07:47.892 [Pipeline] { (Run VM) 00:07:47.902 [Pipeline] sh 00:07:48.181 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:48.181 + echo 'Start stage prepare_nvme.sh' 00:07:48.181 Start stage prepare_nvme.sh 00:07:48.181 + [[ -n 7 ]] 00:07:48.181 + disk_prefix=ex7 00:07:48.181 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:07:48.181 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:07:48.181 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:07:48.181 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:48.181 ++ SPDK_RUN_ASAN=1 00:07:48.181 ++ SPDK_RUN_UBSAN=1 00:07:48.181 ++ SPDK_TEST_RAID=1 00:07:48.181 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:48.181 ++ RUN_NIGHTLY=0 00:07:48.181 + cd /var/jenkins/workspace/raid-vg-autotest 00:07:48.181 + nvme_files=() 00:07:48.181 + declare -A nvme_files 00:07:48.181 + backend_dir=/var/lib/libvirt/images/backends 00:07:48.181 + nvme_files['nvme.img']=5G 00:07:48.181 + nvme_files['nvme-cmb.img']=5G 00:07:48.181 + nvme_files['nvme-multi0.img']=4G 00:07:48.181 + nvme_files['nvme-multi1.img']=4G 00:07:48.181 + nvme_files['nvme-multi2.img']=4G 00:07:48.181 + nvme_files['nvme-openstack.img']=8G 00:07:48.181 + nvme_files['nvme-zns.img']=5G 00:07:48.181 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:48.181 + (( SPDK_TEST_FTL == 1 )) 00:07:48.181 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:48.181 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:07:48.181 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:07:48.181 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:07:48.181 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:07:48.181 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:07:48.181 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.181 + for nvme in "${!nvme_files[@]}" 00:07:48.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:07:48.440 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:48.440 + for nvme in "${!nvme_files[@]}" 00:07:48.440 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:07:48.440 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:48.440 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:07:48.440 + echo 'End stage prepare_nvme.sh' 00:07:48.440 End stage prepare_nvme.sh 00:07:48.451 [Pipeline] sh 00:07:48.734 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:48.734 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:07:48.734 00:07:48.734 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:07:48.734 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:07:48.734 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:07:48.734 HELP=0 00:07:48.734 DRY_RUN=0 00:07:48.734 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:07:48.734 NVME_DISKS_TYPE=nvme,nvme, 00:07:48.734 NVME_AUTO_CREATE=0 00:07:48.734 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:07:48.734 NVME_CMB=,, 00:07:48.734 NVME_PMR=,, 00:07:48.734 NVME_ZNS=,, 00:07:48.734 NVME_MS=,, 00:07:48.734 NVME_FDP=,, 00:07:48.734 SPDK_VAGRANT_DISTRO=fedora39 00:07:48.734 SPDK_VAGRANT_VMCPU=10 00:07:48.734 SPDK_VAGRANT_VMRAM=12288 00:07:48.734 SPDK_VAGRANT_PROVIDER=libvirt 00:07:48.734 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:48.734 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:48.734 SPDK_OPENSTACK_NETWORK=0 00:07:48.734 VAGRANT_PACKAGE_BOX=0 00:07:48.734 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:48.734 FORCE_DISTRO=true 00:07:48.734 VAGRANT_BOX_VERSION= 00:07:48.734 EXTRA_VAGRANTFILES= 00:07:48.734 NIC_MODEL=e1000 00:07:48.734 00:07:48.734 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:07:48.734 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:07:51.269 Bringing machine 'default' up with 'libvirt' provider... 00:07:52.664 ==> default: Creating image (snapshot of base box volume). 00:07:52.664 ==> default: Creating domain with the following settings... 00:07:52.664 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730883650_618cbe1af50c8838d800 00:07:52.664 ==> default: -- Domain type: kvm 00:07:52.664 ==> default: -- Cpus: 10 00:07:52.664 ==> default: -- Feature: acpi 00:07:52.664 ==> default: -- Feature: apic 00:07:52.664 ==> default: -- Feature: pae 00:07:52.664 ==> default: -- Memory: 12288M 00:07:52.664 ==> default: -- Memory Backing: hugepages: 00:07:52.664 ==> default: -- Management MAC: 00:07:52.664 ==> default: -- Loader: 00:07:52.664 ==> default: -- Nvram: 00:07:52.664 ==> default: -- Base box: spdk/fedora39 00:07:52.664 ==> default: -- Storage pool: default 00:07:52.664 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730883650_618cbe1af50c8838d800.img (20G) 00:07:52.664 ==> default: -- Volume Cache: default 00:07:52.664 ==> default: -- Kernel: 00:07:52.664 ==> default: -- Initrd: 00:07:52.664 ==> default: -- Graphics Type: vnc 00:07:52.664 ==> default: -- Graphics Port: -1 00:07:52.664 ==> default: -- Graphics IP: 127.0.0.1 00:07:52.664 ==> default: -- Graphics Password: Not defined 00:07:52.664 ==> default: -- Video Type: cirrus 00:07:52.664 ==> default: -- Video VRAM: 9216 00:07:52.664 ==> default: -- Sound Type: 00:07:52.664 ==> default: -- Keymap: en-us 00:07:52.664 ==> default: -- TPM Path: 00:07:52.664 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:07:52.665 ==> default: -- Command line args: 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:52.665 ==> default: -> value=-drive, 00:07:52.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:52.665 ==> default: -> value=-drive, 00:07:52.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:52.665 ==> default: -> value=-drive, 00:07:52.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:52.665 ==> default: -> value=-drive, 00:07:52.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:52.665 ==> default: -> value=-device, 00:07:52.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:52.923 ==> default: Creating shared folders metadata... 00:07:52.923 ==> default: Starting domain. 00:07:54.832 ==> default: Waiting for domain to get an IP address... 00:08:12.921 ==> default: Waiting for SSH to become available... 
00:08:12.921 ==> default: Configuring and enabling network interfaces... 00:08:18.190 default: SSH address: 192.168.121.3:22 00:08:18.190 default: SSH username: vagrant 00:08:18.190 default: SSH auth method: private key 00:08:20.727 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:30.706 ==> default: Mounting SSHFS shared folder... 00:08:31.655 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:08:31.655 ==> default: Checking Mount.. 00:08:33.036 ==> default: Folder Successfully Mounted! 00:08:33.036 ==> default: Running provisioner: file... 00:08:34.419 default: ~/.gitconfig => .gitconfig 00:08:34.679 00:08:34.679 SUCCESS! 00:08:34.679 00:08:34.679 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:08:34.679 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:34.679 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:08:34.679 00:08:34.688 [Pipeline] } 00:08:34.704 [Pipeline] // stage 00:08:34.714 [Pipeline] dir 00:08:34.715 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:08:34.716 [Pipeline] { 00:08:34.730 [Pipeline] catchError 00:08:34.732 [Pipeline] { 00:08:34.747 [Pipeline] sh 00:08:35.031 + vagrant ssh-config --host vagrant 00:08:35.031 + sed -ne /^Host/,$p 00:08:35.031 + tee ssh_conf 00:08:38.322 Host vagrant 00:08:38.322 HostName 192.168.121.3 00:08:38.322 User vagrant 00:08:38.322 Port 22 00:08:38.322 UserKnownHostsFile /dev/null 00:08:38.322 StrictHostKeyChecking no 00:08:38.322 PasswordAuthentication no 00:08:38.322 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:08:38.322 IdentitiesOnly yes 00:08:38.322 LogLevel FATAL 00:08:38.322 ForwardAgent yes 00:08:38.322 ForwardX11 yes 00:08:38.322 00:08:38.338 [Pipeline] withEnv 00:08:38.340 [Pipeline] { 00:08:38.355 [Pipeline] sh 00:08:38.637 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:38.637 source /etc/os-release 00:08:38.637 [[ -e /image.version ]] && img=$(< /image.version) 00:08:38.637 # Minimal, systemd-like check. 00:08:38.637 if [[ -e /.dockerenv ]]; then 00:08:38.637 # Clear garbage from the node's name: 00:08:38.637 # agt-er_autotest_547-896 -> autotest_547-896 00:08:38.637 # $HOSTNAME is the actual container id 00:08:38.637 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:38.637 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:08:38.637 # We can assume this is a mount from a host where container is running, 00:08:38.637 # so fetch its hostname to easily identify the target swarm worker. 
00:08:38.637 container="$(< /etc/hostname) ($agent)" 00:08:38.637 else 00:08:38.637 # Fallback 00:08:38.637 container=$agent 00:08:38.637 fi 00:08:38.637 fi 00:08:38.637 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:38.637 00:08:38.910 [Pipeline] } 00:08:38.928 [Pipeline] // withEnv 00:08:38.938 [Pipeline] setCustomBuildProperty 00:08:38.955 [Pipeline] stage 00:08:38.957 [Pipeline] { (Tests) 00:08:38.979 [Pipeline] sh 00:08:39.261 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:39.537 [Pipeline] sh 00:08:39.825 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:08:40.103 [Pipeline] timeout 00:08:40.104 Timeout set to expire in 1 hr 30 min 00:08:40.106 [Pipeline] { 00:08:40.122 [Pipeline] sh 00:08:40.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:40.974 HEAD is now at cc533a3e5 nvme/nvme: Factor out submit_request function 00:08:40.986 [Pipeline] sh 00:08:41.265 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:41.538 [Pipeline] sh 00:08:41.822 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:42.098 [Pipeline] sh 00:08:42.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:08:42.644 ++ readlink -f spdk_repo 00:08:42.644 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:42.644 + [[ -n /home/vagrant/spdk_repo ]] 00:08:42.644 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:42.644 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:42.644 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:42.644 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:42.644 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:42.644 + [[ raid-vg-autotest == pkgdep-* ]] 00:08:42.644 + cd /home/vagrant/spdk_repo 00:08:42.644 + source /etc/os-release 00:08:42.644 ++ NAME='Fedora Linux' 00:08:42.644 ++ VERSION='39 (Cloud Edition)' 00:08:42.644 ++ ID=fedora 00:08:42.644 ++ VERSION_ID=39 00:08:42.644 ++ VERSION_CODENAME= 00:08:42.644 ++ PLATFORM_ID=platform:f39 00:08:42.644 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:08:42.644 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:42.644 ++ LOGO=fedora-logo-icon 00:08:42.644 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:08:42.644 ++ HOME_URL=https://fedoraproject.org/ 00:08:42.644 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:08:42.644 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:42.644 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:42.644 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:42.644 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:08:42.644 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:42.644 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:08:42.644 ++ SUPPORT_END=2024-11-12 00:08:42.644 ++ VARIANT='Cloud Edition' 00:08:42.644 ++ VARIANT_ID=cloud 00:08:42.644 + uname -a 00:08:42.644 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:08:42.644 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:43.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.212 Hugepages 00:08:43.212 node hugesize free / total 00:08:43.212 node0 1048576kB 0 / 0 00:08:43.212 node0 2048kB 0 / 0 00:08:43.212 00:08:43.212 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:43.212 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:43.212 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:43.212 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:08:43.212 + rm -f /tmp/spdk-ld-path 00:08:43.212 + source autorun-spdk.conf 00:08:43.212 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:43.212 ++ SPDK_RUN_ASAN=1 00:08:43.212 ++ SPDK_RUN_UBSAN=1 00:08:43.212 ++ SPDK_TEST_RAID=1 00:08:43.212 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:43.212 ++ RUN_NIGHTLY=0 00:08:43.212 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:43.212 + [[ -n '' ]] 00:08:43.212 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:43.212 + for M in /var/spdk/build-*-manifest.txt 00:08:43.212 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:08:43.212 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:43.212 + for M in /var/spdk/build-*-manifest.txt 00:08:43.212 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:43.212 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:43.212 + for M in /var/spdk/build-*-manifest.txt 00:08:43.212 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:43.212 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:43.212 ++ uname 00:08:43.212 + [[ Linux == \L\i\n\u\x ]] 00:08:43.212 + sudo dmesg -T 00:08:43.471 + sudo dmesg --clear 00:08:43.471 + dmesg_pid=5207 00:08:43.471 + sudo dmesg -Tw 00:08:43.471 + [[ Fedora Linux == FreeBSD ]] 00:08:43.471 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.471 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.471 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:43.471 + [[ -x /usr/src/fio-static/fio ]] 00:08:43.471 + export FIO_BIN=/usr/src/fio-static/fio 00:08:43.471 + FIO_BIN=/usr/src/fio-static/fio 00:08:43.471 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:43.471 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:08:43.471 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:43.471 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.471 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.471 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:43.471 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.471 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.471 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:43.471 09:01:42 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:08:43.471 09:01:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:43.471 09:01:42 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:08:43.471 09:01:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:08:43.471 09:01:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:43.471 09:01:42 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:08:43.471 09:01:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.471 09:01:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:43.471 09:01:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:43.471 09:01:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.471 09:01:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.471 09:01:42 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.471 09:01:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.471 09:01:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.471 09:01:42 -- paths/export.sh@5 -- $ export PATH 00:08:43.471 09:01:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.471 09:01:42 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:43.471 09:01:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:08:43.471 09:01:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730883702.XXXXXX 00:08:43.471 09:01:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730883702.9est3A 00:08:43.471 09:01:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:08:43.471 09:01:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:08:43.471 09:01:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:43.471 09:01:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:43.471 09:01:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:43.471 09:01:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:08:43.471 09:01:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:08:43.471 09:01:42 -- common/autotest_common.sh@10 -- $ set +x 00:08:43.730 09:01:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:08:43.730 09:01:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:08:43.730 09:01:42 -- pm/common@17 -- $ local monitor 00:08:43.730 09:01:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.730 09:01:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.730 09:01:42 -- pm/common@25 -- $ sleep 1 00:08:43.730 09:01:42 -- pm/common@21 -- $ date +%s 00:08:43.730 09:01:42 -- pm/common@21 -- $ date +%s 00:08:43.730 
09:01:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730883702
00:08:43.730 09:01:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730883702
00:08:43.730 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730883702_collect-cpu-load.pm.log
00:08:43.730 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730883702_collect-vmstat.pm.log
00:08:44.672 09:01:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:08:44.672 09:01:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:08:44.672 09:01:43 -- spdk/autobuild.sh@12 -- $ umask 022
00:08:44.672 09:01:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:08:44.672 09:01:43 -- spdk/autobuild.sh@16 -- $ date -u
00:08:44.672 Wed Nov 6 09:01:43 AM UTC 2024
00:08:44.672 09:01:43 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:08:44.672 v25.01-pre-166-gcc533a3e5
00:08:44.672 09:01:43 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:08:44.672 09:01:43 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:08:44.672 09:01:43 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:08:44.672 09:01:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:08:44.672 09:01:43 -- common/autotest_common.sh@10 -- $ set +x
00:08:44.672 ************************************
00:08:44.672 START TEST asan
00:08:44.672 ************************************
00:08:44.672 using asan
00:08:44.672 09:01:43 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:08:44.672 
00:08:44.672 real 0m0.000s
00:08:44.672 user 0m0.000s
00:08:44.672 sys 0m0.000s
00:08:44.672 09:01:43 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:08:44.672 ************************************
00:08:44.672 END TEST asan
00:08:44.672 ************************************
00:08:44.672 09:01:43 asan -- common/autotest_common.sh@10 -- $ set +x
00:08:44.672 09:01:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:08:44.672 09:01:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:08:44.672 09:01:43 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:08:44.672 09:01:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:08:44.672 09:01:43 -- common/autotest_common.sh@10 -- $ set +x
00:08:44.672 ************************************
00:08:44.672 START TEST ubsan
00:08:44.672 ************************************
00:08:44.672 using ubsan
00:08:44.672 09:01:43 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:08:44.672 
00:08:44.672 real 0m0.000s
00:08:44.672 user 0m0.000s
00:08:44.672 sys 0m0.000s
00:08:44.672 09:01:43 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:08:44.672 09:01:43 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:08:44.672 ************************************
00:08:44.672 END TEST ubsan
00:08:44.672 ************************************
00:08:44.672 09:01:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:08:44.672 09:01:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:08:44.672 09:01:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:08:44.672 09:01:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:08:44.960 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:08:44.960 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:08:45.529 Using 'verbs' RDMA provider
00:09:01.352 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:09:19.456 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:09:19.456 Creating mk/config.mk...done.
00:09:19.456 Creating mk/cc.flags.mk...done.
00:09:19.456 Type 'make' to build.
00:09:19.456 09:02:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:09:19.456 09:02:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:09:19.456 09:02:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:09:19.456 09:02:16 -- common/autotest_common.sh@10 -- $ set +x
00:09:19.456 ************************************
00:09:19.456 START TEST make
00:09:19.456 ************************************
00:09:19.456 09:02:16 make -- common/autotest_common.sh@1127 -- $ make -j10
00:09:19.456 make[1]: Nothing to be done for 'all'.
00:09:29.431 The Meson build system
00:09:29.431 Version: 1.5.0
00:09:29.431 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:09:29.431 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:09:29.431 Build type: native build
00:09:29.431 Program cat found: YES (/usr/bin/cat)
00:09:29.431 Project name: DPDK
00:09:29.431 Project version: 24.03.0
00:09:29.431 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:09:29.431 C linker for the host machine: cc ld.bfd 2.40-14
00:09:29.431 Host machine cpu family: x86_64
00:09:29.431 Host machine cpu: x86_64
00:09:29.431 Message: ## Building in Developer Mode ##
00:09:29.431 Program pkg-config found: YES (/usr/bin/pkg-config)
00:09:29.431 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:09:29.431 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:09:29.431 Program python3 found: YES (/usr/bin/python3)
00:09:29.431 Program cat found: YES (/usr/bin/cat)
00:09:29.431 Compiler for C supports arguments -march=native: YES
00:09:29.431 Checking for size of "void *" : 8
00:09:29.431 Checking for size of "void *" : 8 (cached)
00:09:29.431 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:09:29.431 Library m found: YES
00:09:29.431 Library numa found: YES
00:09:29.431 Has header "numaif.h" : YES
00:09:29.431 Library fdt found: NO
00:09:29.431 Library execinfo found: NO
00:09:29.431 Has header "execinfo.h" : YES
00:09:29.431 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:09:29.431 Run-time dependency libarchive found: NO (tried pkgconfig)
00:09:29.431 Run-time dependency libbsd found: NO (tried pkgconfig)
00:09:29.431 Run-time dependency jansson found: NO (tried pkgconfig)
00:09:29.431 Run-time dependency openssl found: YES 3.1.1
00:09:29.431 Run-time dependency libpcap found: YES 1.10.4
00:09:29.431 Has header "pcap.h" with dependency libpcap: YES
00:09:29.431 Compiler for C supports arguments -Wcast-qual: YES
00:09:29.431 Compiler for C supports arguments -Wdeprecated: YES
00:09:29.431 Compiler for C supports arguments -Wformat: YES
00:09:29.431 Compiler for C supports arguments -Wformat-nonliteral: NO
00:09:29.431 Compiler for C supports arguments -Wformat-security: NO
00:09:29.431 Compiler for C supports arguments -Wmissing-declarations: YES
00:09:29.431 Compiler for C supports arguments -Wmissing-prototypes: YES
00:09:29.431 Compiler for C supports arguments -Wnested-externs: YES
00:09:29.431 Compiler for C supports arguments -Wold-style-definition: YES
00:09:29.431 Compiler for C supports arguments -Wpointer-arith: YES
00:09:29.431 Compiler for C supports arguments -Wsign-compare: YES
00:09:29.431 Compiler for C supports arguments -Wstrict-prototypes: YES
00:09:29.431 Compiler for C supports arguments -Wundef: YES
00:09:29.431 Compiler for C supports arguments -Wwrite-strings: YES
00:09:29.431 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:09:29.431 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:09:29.431 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:09:29.431 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:09:29.431 Program objdump found: YES (/usr/bin/objdump)
00:09:29.431 Compiler for C supports arguments -mavx512f: YES
00:09:29.431 Checking if "AVX512 checking" compiles: YES
00:09:29.431 Fetching value of define "__SSE4_2__" : 1
00:09:29.431 Fetching value of define "__AES__" : 1
00:09:29.431 Fetching value of define "__AVX__" : 1
00:09:29.431 Fetching value of define "__AVX2__" : 1
00:09:29.431 Fetching value of define "__AVX512BW__" : 1
00:09:29.431 Fetching value of define "__AVX512CD__" : 1
00:09:29.431 Fetching value of define "__AVX512DQ__" : 1
00:09:29.431 Fetching value of define "__AVX512F__" : 1
00:09:29.431 Fetching value of define "__AVX512VL__" : 1
00:09:29.431 Fetching value of define "__PCLMUL__" : 1
00:09:29.431 Fetching value of define "__RDRND__" : 1
00:09:29.431 Fetching value of define "__RDSEED__" : 1
00:09:29.431 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:09:29.431 Fetching value of define "__znver1__" : (undefined)
00:09:29.431 Fetching value of define "__znver2__" : (undefined)
00:09:29.431 Fetching value of define "__znver3__" : (undefined)
00:09:29.431 Fetching value of define "__znver4__" : (undefined)
00:09:29.431 Library asan found: YES
00:09:29.431 Compiler for C supports arguments -Wno-format-truncation: YES
00:09:29.431 Message: lib/log: Defining dependency "log"
00:09:29.431 Message: lib/kvargs: Defining dependency "kvargs"
00:09:29.431 Message: lib/telemetry: Defining dependency "telemetry"
00:09:29.431 Library rt found: YES
00:09:29.431 Checking for function "getentropy" : NO
00:09:29.431 Message: lib/eal: Defining dependency "eal"
00:09:29.431 Message: lib/ring: Defining dependency "ring"
00:09:29.431 Message: lib/rcu: Defining dependency "rcu"
00:09:29.431 Message: lib/mempool: Defining dependency "mempool"
00:09:29.431 Message: lib/mbuf: Defining dependency "mbuf"
00:09:29.431 Fetching value of define "__PCLMUL__" : 1 (cached)
00:09:29.431 Fetching value of define "__AVX512F__" : 1 (cached)
00:09:29.431 Fetching value of define "__AVX512BW__" : 1 (cached)
00:09:29.431 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:09:29.431 Fetching value of define "__AVX512VL__" : 1 (cached)
00:09:29.431 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:09:29.431 Compiler for C supports arguments -mpclmul: YES
00:09:29.431 Compiler for C supports arguments -maes: YES
00:09:29.431 Compiler for C supports arguments -mavx512f: YES (cached)
00:09:29.431 Compiler for C supports arguments -mavx512bw: YES
00:09:29.431 Compiler for C supports arguments -mavx512dq: YES
00:09:29.431 Compiler for C supports arguments -mavx512vl: YES
00:09:29.431 Compiler for C supports arguments -mvpclmulqdq: YES
00:09:29.431 Compiler for C supports arguments -mavx2: YES
00:09:29.431 Compiler for C supports arguments -mavx: YES
00:09:29.431 Message: lib/net: Defining dependency "net"
00:09:29.431 Message: lib/meter: Defining dependency "meter"
00:09:29.431 Message: lib/ethdev: Defining dependency "ethdev"
00:09:29.431 Message: lib/pci: Defining dependency "pci"
00:09:29.431 Message: lib/cmdline: Defining dependency "cmdline"
00:09:29.431 Message: lib/hash: Defining dependency "hash"
00:09:29.431 Message: lib/timer: Defining dependency "timer"
00:09:29.431 Message: lib/compressdev: Defining dependency "compressdev"
00:09:29.431 Message: lib/cryptodev: Defining dependency "cryptodev"
00:09:29.431 Message: lib/dmadev: Defining dependency "dmadev"
00:09:29.431 Compiler for C supports arguments -Wno-cast-qual: YES
00:09:29.431 Message: lib/power: Defining dependency "power"
00:09:29.431 Message: lib/reorder: Defining dependency "reorder"
00:09:29.431 Message: lib/security: Defining dependency "security"
00:09:29.431 Has header "linux/userfaultfd.h" : YES
00:09:29.431 Has header "linux/vduse.h" : YES
00:09:29.431 Message: lib/vhost: Defining dependency "vhost"
00:09:29.431 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:09:29.431 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:09:29.431 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:09:29.431 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:09:29.431 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:09:29.431 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:09:29.431 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:09:29.431 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:09:29.431 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:09:29.431 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:09:29.431 Program doxygen found: YES (/usr/local/bin/doxygen)
00:09:29.431 Configuring doxy-api-html.conf using configuration
00:09:29.431 Configuring doxy-api-man.conf using configuration
00:09:29.431 Program mandb found: YES (/usr/bin/mandb)
00:09:29.431 Program sphinx-build found: NO
00:09:29.431 Configuring rte_build_config.h using configuration
00:09:29.431 Message: 
00:09:29.431 =================
00:09:29.431 Applications Enabled
00:09:29.431 =================
00:09:29.431 
00:09:29.431 apps:
00:09:29.431 
00:09:29.431 
00:09:29.432 Message: 
00:09:29.432 =================
00:09:29.432 Libraries Enabled
00:09:29.432 =================
00:09:29.432 
00:09:29.432 libs:
00:09:29.432 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:09:29.432 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:09:29.432 cryptodev, dmadev, power, reorder, security, vhost, 
00:09:29.432 
00:09:29.432 Message: 
00:09:29.432 ===============
00:09:29.432 Drivers Enabled
00:09:29.432 ===============
00:09:29.432 
00:09:29.432 common:
00:09:29.432 
00:09:29.432 bus:
00:09:29.432 pci, vdev, 
00:09:29.432 mempool:
00:09:29.432 ring, 
00:09:29.432 dma:
00:09:29.432 
00:09:29.432 net:
00:09:29.432 
00:09:29.432 crypto:
00:09:29.432 
00:09:29.432 compress:
00:09:29.432 
00:09:29.432 vdpa:
00:09:29.432 
00:09:29.432 
00:09:29.432 Message: 
00:09:29.432 =================
00:09:29.432 Content Skipped
00:09:29.432 =================
00:09:29.432 
00:09:29.432 apps:
00:09:29.432 dumpcap: explicitly disabled via build config
00:09:29.432 graph: explicitly disabled via build config
00:09:29.432 pdump: explicitly disabled via build config
00:09:29.432 proc-info: explicitly disabled via build config
00:09:29.432 test-acl: explicitly disabled via build config
00:09:29.432 test-bbdev: explicitly disabled via build config
00:09:29.432 test-cmdline: explicitly disabled via build config
00:09:29.432 test-compress-perf: explicitly disabled via build config
00:09:29.432 test-crypto-perf: explicitly disabled via build config
00:09:29.432 test-dma-perf: explicitly disabled via build config
00:09:29.432 test-eventdev: explicitly disabled via build config
00:09:29.432 test-fib: explicitly disabled via build config
00:09:29.432 test-flow-perf: explicitly disabled via build config
00:09:29.432 test-gpudev: explicitly disabled via build config
00:09:29.432 test-mldev: explicitly disabled via build config
00:09:29.432 test-pipeline: explicitly disabled via build config
00:09:29.432 test-pmd: explicitly disabled via build config
00:09:29.432 test-regex: explicitly disabled via build config
00:09:29.432 test-sad: explicitly disabled via build config
00:09:29.432 test-security-perf: explicitly disabled via build config
00:09:29.432 
00:09:29.432 libs:
00:09:29.432 argparse: explicitly disabled via build config
00:09:29.432 metrics: explicitly disabled via build config
00:09:29.432 acl: explicitly disabled via build config
00:09:29.432 bbdev: explicitly disabled via build config
00:09:29.432 bitratestats: explicitly disabled via build config
00:09:29.432 bpf: explicitly disabled via build config
00:09:29.432 cfgfile: explicitly disabled via build config
00:09:29.432 distributor: explicitly disabled via build config
00:09:29.432 efd: explicitly disabled via build config
00:09:29.432 eventdev: explicitly disabled via build config
00:09:29.432 dispatcher: explicitly disabled via build config
00:09:29.432 gpudev: explicitly disabled via build config
00:09:29.432 gro: explicitly disabled via build config
00:09:29.432 gso: explicitly disabled via build config
00:09:29.432 ip_frag: explicitly disabled via build config
00:09:29.432 jobstats: explicitly disabled via build config
00:09:29.432 latencystats: explicitly disabled via build config
00:09:29.432 lpm: explicitly disabled via build config
00:09:29.432 member: explicitly disabled via build config
00:09:29.432 pcapng: explicitly disabled via build config
00:09:29.432 rawdev: explicitly disabled via build config
00:09:29.432 regexdev: explicitly disabled via build config
00:09:29.432 mldev: explicitly disabled via build config
00:09:29.432 rib: explicitly disabled via build config
00:09:29.432 sched: explicitly disabled via build config
00:09:29.432 stack: explicitly disabled via build config
00:09:29.432 ipsec: explicitly disabled via build config
00:09:29.432 pdcp: explicitly disabled via build config
00:09:29.432 fib: explicitly disabled via build config
00:09:29.432 port: explicitly disabled via build config
00:09:29.432 pdump: explicitly disabled via build config
00:09:29.432 table: explicitly disabled via build config
00:09:29.432 pipeline: explicitly disabled via build config
00:09:29.432 graph: explicitly disabled via build config
00:09:29.432 node: explicitly disabled via build config
00:09:29.432 
00:09:29.432 drivers:
00:09:29.432 common/cpt: not in enabled drivers build config
00:09:29.432 common/dpaax: not in enabled drivers build config
00:09:29.432 common/iavf: not in enabled drivers build config
00:09:29.432 common/idpf: not in enabled drivers build config
00:09:29.432 common/ionic: not in enabled drivers build config
00:09:29.432 common/mvep: not in enabled drivers build config
00:09:29.432 common/octeontx: not in enabled drivers build config
00:09:29.432 bus/auxiliary: not in enabled drivers build config
00:09:29.432 bus/cdx: not in enabled drivers build config
00:09:29.432 bus/dpaa: not in enabled drivers build config
00:09:29.432 bus/fslmc: not in enabled drivers build config
00:09:29.432 bus/ifpga: not in enabled drivers build config
00:09:29.432 bus/platform: not in enabled drivers build config
00:09:29.432 bus/uacce: not in enabled drivers build config
00:09:29.432 bus/vmbus: not in enabled drivers build config
00:09:29.432 common/cnxk: not in enabled drivers build config
00:09:29.432 common/mlx5: not in enabled drivers build config
00:09:29.432 common/nfp: not in enabled drivers build config
00:09:29.432 common/nitrox: not in enabled drivers build config
00:09:29.432 common/qat: not in enabled drivers build config
00:09:29.432 common/sfc_efx: not in enabled drivers build config
00:09:29.432 mempool/bucket: not in enabled drivers build config
00:09:29.432 mempool/cnxk: not in enabled drivers build config
00:09:29.432 mempool/dpaa: not in enabled drivers build config
00:09:29.432 mempool/dpaa2: not in enabled drivers build config
00:09:29.432 mempool/octeontx: not in enabled drivers build config
00:09:29.432 mempool/stack: not in enabled drivers build config
00:09:29.432 dma/cnxk: not in enabled drivers build config
00:09:29.432 dma/dpaa: not in enabled drivers build config
00:09:29.432 dma/dpaa2: not in enabled drivers build config
00:09:29.432 dma/hisilicon: not in enabled drivers build config
00:09:29.432 dma/idxd: not in enabled drivers build config
00:09:29.432 dma/ioat: not in enabled drivers build config
00:09:29.432 dma/skeleton: not in enabled drivers build config
00:09:29.432 net/af_packet: not in enabled drivers build config
00:09:29.432 net/af_xdp: not in enabled drivers build config
00:09:29.432 net/ark: not in enabled drivers build config
00:09:29.432 net/atlantic: not in enabled drivers build config
00:09:29.432 net/avp: not in enabled drivers build config
00:09:29.432 net/axgbe: not in enabled drivers build config
00:09:29.432 net/bnx2x: not in enabled drivers build config
00:09:29.432 net/bnxt: not in enabled drivers build config
00:09:29.432 net/bonding: not in enabled drivers build config
00:09:29.432 net/cnxk: not in enabled drivers build config
00:09:29.432 net/cpfl: not in enabled drivers build config
00:09:29.432 net/cxgbe: not in enabled drivers build config
00:09:29.432 net/dpaa: not in enabled drivers build config
00:09:29.432 net/dpaa2: not in enabled drivers build config
00:09:29.432 net/e1000: not in enabled drivers build config
00:09:29.432 net/ena: not in enabled drivers build config
00:09:29.432 net/enetc: not in enabled drivers build config
00:09:29.432 net/enetfec: not in enabled drivers build config
00:09:29.432 net/enic: not in enabled drivers build config
00:09:29.432 net/failsafe: not in enabled drivers build config
00:09:29.432 net/fm10k: not in enabled drivers build config
00:09:29.432 net/gve: not in enabled drivers build config
00:09:29.432 net/hinic: not in enabled drivers build config
00:09:29.432 net/hns3: not in enabled drivers build config
00:09:29.432 net/i40e: not in enabled drivers build config
00:09:29.432 net/iavf: not in enabled drivers build config
00:09:29.432 net/ice: not in enabled drivers build config
00:09:29.432 net/idpf: not in enabled drivers build config
00:09:29.432 net/igc: not in enabled drivers build config
00:09:29.432 net/ionic: not in enabled drivers build config
00:09:29.432 net/ipn3ke: not in enabled drivers build config
00:09:29.432 net/ixgbe: not in enabled drivers build config
00:09:29.432 net/mana: not in enabled drivers build config
00:09:29.432 net/memif: not in enabled drivers build config
00:09:29.432 net/mlx4: not in enabled drivers build config
00:09:29.432 net/mlx5: not in enabled drivers build config
00:09:29.432 net/mvneta: not in enabled drivers build config
00:09:29.432 net/mvpp2: not in enabled drivers build config
00:09:29.432 net/netvsc: not in enabled drivers build config
00:09:29.432 net/nfb: not in enabled drivers build config
00:09:29.432 net/nfp: not in enabled drivers build config
00:09:29.432 net/ngbe: not in enabled drivers build config
00:09:29.432 net/null: not in enabled drivers build config
00:09:29.432 net/octeontx: not in enabled drivers build config
00:09:29.432 net/octeon_ep: not in enabled drivers build config
00:09:29.432 net/pcap: not in enabled drivers build config
00:09:29.432 net/pfe: not in enabled drivers build config
00:09:29.432 net/qede: not in enabled drivers build config
00:09:29.432 net/ring: not in enabled drivers build config
00:09:29.432 net/sfc: not in enabled drivers build config
00:09:29.432 net/softnic: not in enabled drivers build config
00:09:29.432 net/tap: not in enabled drivers build config
00:09:29.432 net/thunderx: not in enabled drivers build config
00:09:29.432 net/txgbe: not in enabled drivers build config
00:09:29.432 net/vdev_netvsc: not in enabled drivers build config
00:09:29.432 net/vhost: not in enabled drivers build config
00:09:29.432 net/virtio: not in enabled drivers build config
00:09:29.432 net/vmxnet3: not in enabled drivers build config
00:09:29.432 raw/*: missing internal dependency, "rawdev"
00:09:29.432 crypto/armv8: not in enabled drivers build config
00:09:29.432 crypto/bcmfs: not in enabled drivers build config
00:09:29.432 crypto/caam_jr: not in enabled drivers build config
00:09:29.432 crypto/ccp: not in enabled drivers build config
00:09:29.432 crypto/cnxk: not in enabled drivers build config
00:09:29.432 crypto/dpaa_sec: not in enabled drivers build config
00:09:29.432 crypto/dpaa2_sec: not in enabled drivers build config
00:09:29.432 crypto/ipsec_mb: not in enabled drivers build config
00:09:29.432 crypto/mlx5: not in enabled drivers build config
00:09:29.432 crypto/mvsam: not in enabled drivers build config
00:09:29.432 crypto/nitrox: not in enabled drivers build config
00:09:29.433 crypto/null: not in enabled drivers build config
00:09:29.433 crypto/octeontx: not in enabled drivers build config
00:09:29.433 crypto/openssl: not in enabled drivers build config
00:09:29.433 crypto/scheduler: not in enabled drivers build config
00:09:29.433 crypto/uadk: not in enabled drivers build config
00:09:29.433 crypto/virtio: not in enabled drivers build config
00:09:29.433 compress/isal: not in enabled drivers build config
00:09:29.433 compress/mlx5: not in enabled drivers build config
00:09:29.433 compress/nitrox: not in enabled drivers build config
00:09:29.433 compress/octeontx: not in enabled drivers build config
00:09:29.433 compress/zlib: not in enabled drivers build config
00:09:29.433 regex/*: missing internal dependency, "regexdev"
00:09:29.433 ml/*: missing internal dependency, "mldev"
00:09:29.433 vdpa/ifc: not in enabled drivers build config
00:09:29.433 vdpa/mlx5: not in enabled drivers build config
00:09:29.433 vdpa/nfp: not in enabled drivers build config
00:09:29.433 vdpa/sfc: not in enabled drivers build config
00:09:29.433 event/*: missing internal dependency, "eventdev"
00:09:29.433 baseband/*: missing internal dependency, "bbdev"
00:09:29.433 gpu/*: missing internal dependency, "gpudev"
00:09:29.433 
00:09:29.433 
00:09:29.433 Build targets in project: 85
00:09:29.433 
00:09:29.433 DPDK 24.03.0
00:09:29.433 
00:09:29.433 User defined options
00:09:29.433 buildtype : debug
00:09:29.433 default_library : shared
00:09:29.433 libdir : lib
00:09:29.433 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:09:29.433 b_sanitize : address
00:09:29.433 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:09:29.433 c_link_args : 
00:09:29.433 cpu_instruction_set: native
00:09:29.433 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:09:29.433 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:09:29.433 enable_docs : false
00:09:29.433 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:09:29.433 enable_kmods : false
00:09:29.433 max_lcores : 128
00:09:29.433 tests : false
00:09:29.433 
00:09:29.433 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:09:29.433 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:09:29.692 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:09:29.692 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:09:29.692 [3/268] Linking static target lib/librte_log.a
00:09:29.692 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:09:29.692 [5/268] Linking static target lib/librte_kvargs.a
00:09:29.692 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:09:29.969 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:09:29.969 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:09:29.969 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:09:29.969 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:09:29.969 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:09:30.228 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:09:30.228 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:09:30.228 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:09:30.228 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:09:30.228 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:09:30.228 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:09:30.228 [18/268] Linking static target lib/librte_telemetry.a
00:09:30.486 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:09:30.748 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:09:30.748 [21/268] Linking target lib/librte_log.so.24.1
00:09:30.748 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:09:30.748 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:09:30.748 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:09:30.748 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:09:30.748 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:09:30.748 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:09:30.748 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:09:31.009 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:09:31.009 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:09:31.009 [31/268] Linking target lib/librte_kvargs.so.24.1
00:09:31.009 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:09:31.267 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:09:31.267 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:09:31.267 [35/268] Linking target lib/librte_telemetry.so.24.1
00:09:31.267 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:09:31.267 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:09:31.267 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:09:31.267 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:09:31.526 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:09:31.526 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:09:31.526 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:09:31.526 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:09:31.526 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:09:31.526 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:09:31.526 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:09:31.786 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:09:31.786 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:09:31.786 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:09:31.786 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:09:32.045 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:09:32.045 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:09:32.045 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:09:32.045 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:09:32.045 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:09:32.045 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:09:32.304 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:09:32.304 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:09:32.304 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:09:32.304 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:09:32.304 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:09:32.304 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:09:32.304 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:09:32.304 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:09:32.562 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:09:32.562 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:09:32.562 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:09:32.562 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:09:32.821 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:09:32.821 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:09:32.821 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:09:32.821 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:09:32.821 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:09:32.821 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:09:32.821 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:09:32.821 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:09:33.080 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:09:33.080 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:09:33.080 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:09:33.080 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:09:33.080 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:09:33.080 [82/268] Linking static target lib/librte_ring.a
00:09:33.339 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:09:33.339 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:09:33.339 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:09:33.339 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:09:33.339 [87/268] Linking static target lib/librte_eal.a
00:09:33.631 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:09:33.631 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:09:33.631 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:09:33.631 [91/268] Linking static target lib/librte_rcu.a
00:09:33.631 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:09:33.631 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:09:33.631 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:09:33.631 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:09:33.631 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:09:33.631 [97/268] Linking static target lib/librte_mempool.a
00:09:34.201 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:09:34.201 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:09:34.201 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:09:34.201 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:09:34.201 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:09:34.201 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:09:34.201 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:09:34.201 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:09:34.460 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:09:34.460 [107/268] Linking static target lib/librte_net.a
00:09:34.460 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:09:34.460 [109/268] Linking static target lib/librte_meter.a
00:09:34.720 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:09:34.720 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:09:34.720 [112/268] Linking static target lib/librte_mbuf.a
00:09:34.720 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:09:34.720 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:09:34.979 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:09:34.979 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:09:34.979 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:09:34.979 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:09:34.979 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:09:35.240 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:09:35.505 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:09:35.765 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:09:35.765 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:09:35.765 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:09:35.765 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:09:35.765 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:09:35.765 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:09:36.023 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:09:36.023 [129/268] Linking static target lib/librte_pci.a
00:09:36.023 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:09:36.023 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:09:36.023 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:09:36.023 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:09:36.023 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:09:36.023 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:09:36.283 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:09:36.283 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:09:36.283 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:09:36.283 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:09:36.283 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:09:36.283 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:09:36.283 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:09:36.283 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:09:36.283 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:09:36.283 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:09:36.283 [146/268] Linking static target lib/librte_cmdline.a
00:09:36.542 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:09:36.542 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:09:36.801 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:09:36.801 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:09:36.801 [151/268] Linking static target lib/librte_timer.a
00:09:36.801 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:09:36.801 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:09:37.065 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:09:37.065 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:09:37.341 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:09:37.341 [157/268] Linking static target lib/librte_ethdev.a
00:09:37.341 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:09:37.341 [159/268] Linking static target lib/librte_compressdev.a
00:09:37.341 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:09:37.341 [161/268]
Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:37.600 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:37.601 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:37.601 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:37.601 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:37.859 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:37.859 [167/268] Linking static target lib/librte_dmadev.a 00:09:37.859 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:37.859 [169/268] Linking static target lib/librte_hash.a 00:09:38.119 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:38.119 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:38.119 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:38.119 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:38.119 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:38.119 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:38.379 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:38.379 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:38.638 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:38.638 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:38.638 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:38.638 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:38.638 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:09:38.638 [183/268] Linking static target lib/librte_power.a 00:09:38.898 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:38.898 [185/268] Linking static target lib/librte_cryptodev.a 00:09:38.898 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.158 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:39.158 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:39.158 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:39.158 [190/268] Linking static target lib/librte_security.a 00:09:39.158 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:39.158 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:39.158 [193/268] Linking static target lib/librte_reorder.a 00:09:39.726 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:39.726 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.986 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.986 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.986 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:40.245 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:40.245 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:40.506 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:40.506 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:40.506 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:40.765 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:40.765 [205/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:40.765 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:40.765 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:40.765 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:40.765 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:40.765 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:41.025 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:41.025 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:41.025 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:41.025 [214/268] Linking static target drivers/librte_bus_vdev.a 00:09:41.300 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:41.300 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:41.300 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:41.300 [218/268] Linking static target drivers/librte_bus_pci.a 00:09:41.300 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.300 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:41.300 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:41.560 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.560 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:41.560 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:41.560 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:41.560 [226/268] Linking static target drivers/librte_mempool_ring.a 00:09:41.819 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:42.388 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:45.679 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:45.679 [230/268] Linking static target lib/librte_vhost.a 00:09:45.946 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:45.946 [232/268] Linking target lib/librte_eal.so.24.1 00:09:46.217 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:09:46.217 [234/268] Linking target lib/librte_ring.so.24.1 00:09:46.217 [235/268] Linking target lib/librte_dmadev.so.24.1 00:09:46.217 [236/268] Linking target lib/librte_meter.so.24.1 00:09:46.217 [237/268] Linking target lib/librte_timer.so.24.1 00:09:46.217 [238/268] Linking target lib/librte_pci.so.24.1 00:09:46.217 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:09:46.217 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:09:46.218 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:09:46.218 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:09:46.218 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:09:46.218 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:09:46.218 [245/268] Linking target lib/librte_rcu.so.24.1 00:09:46.218 [246/268] Linking target lib/librte_mempool.so.24.1 00:09:46.477 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:09:46.477 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:09:46.477 [249/268] Generating 
symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:09:46.477 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:46.477 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:09:46.477 [252/268] Linking target lib/librte_mbuf.so.24.1 00:09:46.737 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:09:46.737 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:09:46.737 [255/268] Linking target lib/librte_reorder.so.24.1 00:09:46.737 [256/268] Linking target lib/librte_net.so.24.1 00:09:46.737 [257/268] Linking target lib/librte_compressdev.so.24.1 00:09:46.737 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:09:46.737 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:09:46.737 [260/268] Linking target lib/librte_hash.so.24.1 00:09:46.737 [261/268] Linking target lib/librte_security.so.24.1 00:09:46.737 [262/268] Linking target lib/librte_cmdline.so.24.1 00:09:46.995 [263/268] Linking target lib/librte_ethdev.so.24.1 00:09:46.995 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:09:46.995 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:09:46.996 [266/268] Linking target lib/librte_power.so.24.1 00:09:47.932 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.932 [268/268] Linking target lib/librte_vhost.so.24.1 00:09:47.932 INFO: autodetecting backend as ninja 00:09:47.932 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:10:06.032 CC lib/ut/ut.o 00:10:06.032 CC lib/ut_mock/mock.o 00:10:06.032 CC lib/log/log.o 00:10:06.032 CC lib/log/log_deprecated.o 00:10:06.032 CC lib/log/log_flags.o 00:10:06.032 LIB libspdk_ut_mock.a 00:10:06.032 LIB libspdk_ut.a 
00:10:06.032 SO libspdk_ut_mock.so.6.0 00:10:06.032 LIB libspdk_log.a 00:10:06.032 SO libspdk_ut.so.2.0 00:10:06.032 SO libspdk_log.so.7.1 00:10:06.032 SYMLINK libspdk_ut_mock.so 00:10:06.032 SYMLINK libspdk_ut.so 00:10:06.032 SYMLINK libspdk_log.so 00:10:06.032 CC lib/ioat/ioat.o 00:10:06.032 CXX lib/trace_parser/trace.o 00:10:06.032 CC lib/dma/dma.o 00:10:06.032 CC lib/util/crc32c.o 00:10:06.032 CC lib/util/crc32.o 00:10:06.032 CC lib/util/base64.o 00:10:06.032 CC lib/util/bit_array.o 00:10:06.032 CC lib/util/cpuset.o 00:10:06.032 CC lib/util/crc16.o 00:10:06.032 CC lib/vfio_user/host/vfio_user_pci.o 00:10:06.032 CC lib/vfio_user/host/vfio_user.o 00:10:06.032 CC lib/util/crc32_ieee.o 00:10:06.032 CC lib/util/crc64.o 00:10:06.032 LIB libspdk_dma.a 00:10:06.032 CC lib/util/dif.o 00:10:06.032 SO libspdk_dma.so.5.0 00:10:06.032 LIB libspdk_ioat.a 00:10:06.032 CC lib/util/fd.o 00:10:06.033 SO libspdk_ioat.so.7.0 00:10:06.033 CC lib/util/fd_group.o 00:10:06.033 SYMLINK libspdk_dma.so 00:10:06.033 CC lib/util/file.o 00:10:06.033 CC lib/util/hexlify.o 00:10:06.033 CC lib/util/iov.o 00:10:06.033 SYMLINK libspdk_ioat.so 00:10:06.033 CC lib/util/math.o 00:10:06.033 CC lib/util/net.o 00:10:06.033 LIB libspdk_vfio_user.a 00:10:06.033 CC lib/util/pipe.o 00:10:06.033 SO libspdk_vfio_user.so.5.0 00:10:06.033 CC lib/util/strerror_tls.o 00:10:06.033 CC lib/util/string.o 00:10:06.033 SYMLINK libspdk_vfio_user.so 00:10:06.033 CC lib/util/uuid.o 00:10:06.033 CC lib/util/xor.o 00:10:06.033 CC lib/util/zipf.o 00:10:06.033 CC lib/util/md5.o 00:10:06.033 LIB libspdk_util.a 00:10:06.033 SO libspdk_util.so.10.1 00:10:06.033 LIB libspdk_trace_parser.a 00:10:06.033 SO libspdk_trace_parser.so.6.0 00:10:06.291 SYMLINK libspdk_util.so 00:10:06.291 SYMLINK libspdk_trace_parser.so 00:10:06.548 CC lib/env_dpdk/env.o 00:10:06.548 CC lib/env_dpdk/memory.o 00:10:06.548 CC lib/env_dpdk/init.o 00:10:06.549 CC lib/env_dpdk/pci.o 00:10:06.549 CC lib/env_dpdk/threads.o 00:10:06.549 CC 
lib/rdma_utils/rdma_utils.o 00:10:06.549 CC lib/json/json_parse.o 00:10:06.549 CC lib/conf/conf.o 00:10:06.549 CC lib/vmd/vmd.o 00:10:06.549 CC lib/idxd/idxd.o 00:10:06.549 CC lib/vmd/led.o 00:10:06.806 LIB libspdk_conf.a 00:10:06.806 CC lib/json/json_util.o 00:10:06.806 SO libspdk_conf.so.6.0 00:10:06.806 LIB libspdk_rdma_utils.a 00:10:06.806 SO libspdk_rdma_utils.so.1.0 00:10:06.806 SYMLINK libspdk_conf.so 00:10:06.806 CC lib/idxd/idxd_user.o 00:10:06.806 CC lib/idxd/idxd_kernel.o 00:10:06.806 CC lib/json/json_write.o 00:10:06.806 SYMLINK libspdk_rdma_utils.so 00:10:06.806 CC lib/env_dpdk/pci_ioat.o 00:10:06.806 CC lib/env_dpdk/pci_virtio.o 00:10:06.806 CC lib/env_dpdk/pci_vmd.o 00:10:07.063 CC lib/env_dpdk/pci_idxd.o 00:10:07.063 CC lib/env_dpdk/pci_event.o 00:10:07.063 CC lib/env_dpdk/sigbus_handler.o 00:10:07.063 CC lib/env_dpdk/pci_dpdk.o 00:10:07.063 CC lib/rdma_provider/common.o 00:10:07.063 CC lib/rdma_provider/rdma_provider_verbs.o 00:10:07.063 LIB libspdk_json.a 00:10:07.063 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:07.063 LIB libspdk_idxd.a 00:10:07.063 SO libspdk_json.so.6.0 00:10:07.063 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:07.063 SO libspdk_idxd.so.12.1 00:10:07.063 LIB libspdk_vmd.a 00:10:07.322 SYMLINK libspdk_json.so 00:10:07.322 SO libspdk_vmd.so.6.0 00:10:07.322 SYMLINK libspdk_idxd.so 00:10:07.322 SYMLINK libspdk_vmd.so 00:10:07.322 LIB libspdk_rdma_provider.a 00:10:07.322 SO libspdk_rdma_provider.so.7.0 00:10:07.582 SYMLINK libspdk_rdma_provider.so 00:10:07.582 CC lib/jsonrpc/jsonrpc_server.o 00:10:07.582 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:07.582 CC lib/jsonrpc/jsonrpc_client.o 00:10:07.582 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:07.840 LIB libspdk_jsonrpc.a 00:10:07.840 SO libspdk_jsonrpc.so.6.0 00:10:07.840 SYMLINK libspdk_jsonrpc.so 00:10:08.099 LIB libspdk_env_dpdk.a 00:10:08.358 SO libspdk_env_dpdk.so.15.1 00:10:08.358 CC lib/rpc/rpc.o 00:10:08.358 SYMLINK libspdk_env_dpdk.so 00:10:08.617 LIB libspdk_rpc.a 00:10:08.617 SO 
libspdk_rpc.so.6.0 00:10:08.617 SYMLINK libspdk_rpc.so 00:10:09.185 CC lib/keyring/keyring.o 00:10:09.185 CC lib/notify/notify.o 00:10:09.185 CC lib/keyring/keyring_rpc.o 00:10:09.185 CC lib/notify/notify_rpc.o 00:10:09.185 CC lib/trace/trace.o 00:10:09.185 CC lib/trace/trace_flags.o 00:10:09.185 CC lib/trace/trace_rpc.o 00:10:09.185 LIB libspdk_notify.a 00:10:09.185 SO libspdk_notify.so.6.0 00:10:09.185 LIB libspdk_keyring.a 00:10:09.444 SO libspdk_keyring.so.2.0 00:10:09.444 SYMLINK libspdk_notify.so 00:10:09.444 LIB libspdk_trace.a 00:10:09.444 SYMLINK libspdk_keyring.so 00:10:09.444 SO libspdk_trace.so.11.0 00:10:09.444 SYMLINK libspdk_trace.so 00:10:10.019 CC lib/sock/sock_rpc.o 00:10:10.019 CC lib/sock/sock.o 00:10:10.019 CC lib/thread/thread.o 00:10:10.019 CC lib/thread/iobuf.o 00:10:10.278 LIB libspdk_sock.a 00:10:10.278 SO libspdk_sock.so.10.0 00:10:10.653 SYMLINK libspdk_sock.so 00:10:10.914 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:10.914 CC lib/nvme/nvme_ns_cmd.o 00:10:10.914 CC lib/nvme/nvme_ctrlr.o 00:10:10.914 CC lib/nvme/nvme_fabric.o 00:10:10.914 CC lib/nvme/nvme_pcie.o 00:10:10.914 CC lib/nvme/nvme_ns.o 00:10:10.914 CC lib/nvme/nvme_pcie_common.o 00:10:10.914 CC lib/nvme/nvme.o 00:10:10.914 CC lib/nvme/nvme_qpair.o 00:10:11.483 CC lib/nvme/nvme_quirks.o 00:10:11.484 CC lib/nvme/nvme_transport.o 00:10:11.484 CC lib/nvme/nvme_discovery.o 00:10:11.742 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:11.742 LIB libspdk_thread.a 00:10:11.742 SO libspdk_thread.so.11.0 00:10:11.742 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:11.742 CC lib/nvme/nvme_tcp.o 00:10:11.742 SYMLINK libspdk_thread.so 00:10:11.742 CC lib/nvme/nvme_opal.o 00:10:12.000 CC lib/accel/accel.o 00:10:12.000 CC lib/nvme/nvme_io_msg.o 00:10:12.000 CC lib/nvme/nvme_poll_group.o 00:10:12.260 CC lib/nvme/nvme_zns.o 00:10:12.260 CC lib/nvme/nvme_stubs.o 00:10:12.260 CC lib/nvme/nvme_auth.o 00:10:12.260 CC lib/blob/blobstore.o 00:10:12.260 CC lib/nvme/nvme_cuse.o 00:10:12.520 CC lib/nvme/nvme_rdma.o 00:10:12.520 
CC lib/blob/request.o 00:10:12.779 CC lib/blob/zeroes.o 00:10:12.779 CC lib/blob/blob_bs_dev.o 00:10:13.037 CC lib/accel/accel_rpc.o 00:10:13.037 CC lib/accel/accel_sw.o 00:10:13.037 CC lib/init/json_config.o 00:10:13.295 CC lib/init/subsystem.o 00:10:13.295 CC lib/init/subsystem_rpc.o 00:10:13.295 CC lib/init/rpc.o 00:10:13.295 CC lib/virtio/virtio.o 00:10:13.295 LIB libspdk_accel.a 00:10:13.295 CC lib/virtio/virtio_vhost_user.o 00:10:13.295 CC lib/fsdev/fsdev.o 00:10:13.295 SO libspdk_accel.so.16.0 00:10:13.554 CC lib/virtio/virtio_vfio_user.o 00:10:13.554 CC lib/virtio/virtio_pci.o 00:10:13.554 LIB libspdk_init.a 00:10:13.554 SYMLINK libspdk_accel.so 00:10:13.554 SO libspdk_init.so.6.0 00:10:13.554 CC lib/fsdev/fsdev_io.o 00:10:13.554 SYMLINK libspdk_init.so 00:10:13.554 CC lib/fsdev/fsdev_rpc.o 00:10:13.554 CC lib/bdev/bdev.o 00:10:13.831 CC lib/bdev/bdev_rpc.o 00:10:13.831 CC lib/bdev/bdev_zone.o 00:10:13.831 CC lib/event/app.o 00:10:13.831 LIB libspdk_virtio.a 00:10:13.831 CC lib/event/reactor.o 00:10:13.831 SO libspdk_virtio.so.7.0 00:10:13.831 SYMLINK libspdk_virtio.so 00:10:13.831 CC lib/bdev/part.o 00:10:13.831 CC lib/bdev/scsi_nvme.o 00:10:14.090 CC lib/event/log_rpc.o 00:10:14.090 LIB libspdk_nvme.a 00:10:14.090 CC lib/event/app_rpc.o 00:10:14.090 CC lib/event/scheduler_static.o 00:10:14.090 LIB libspdk_fsdev.a 00:10:14.090 SO libspdk_fsdev.so.2.0 00:10:14.349 SYMLINK libspdk_fsdev.so 00:10:14.349 SO libspdk_nvme.so.15.0 00:10:14.349 LIB libspdk_event.a 00:10:14.349 SO libspdk_event.so.14.0 00:10:14.349 SYMLINK libspdk_event.so 00:10:14.608 SYMLINK libspdk_nvme.so 00:10:14.608 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:10:15.175 LIB libspdk_fuse_dispatcher.a 00:10:15.433 SO libspdk_fuse_dispatcher.so.1.0 00:10:15.433 SYMLINK libspdk_fuse_dispatcher.so 00:10:16.000 LIB libspdk_blob.a 00:10:16.000 SO libspdk_blob.so.11.0 00:10:16.259 SYMLINK libspdk_blob.so 00:10:16.517 CC lib/lvol/lvol.o 00:10:16.517 CC lib/blobfs/blobfs.o 00:10:16.517 CC 
lib/blobfs/tree.o 00:10:16.782 LIB libspdk_bdev.a 00:10:16.782 SO libspdk_bdev.so.17.0 00:10:17.041 SYMLINK libspdk_bdev.so 00:10:17.299 CC lib/ublk/ublk_rpc.o 00:10:17.299 CC lib/scsi/dev.o 00:10:17.299 CC lib/scsi/lun.o 00:10:17.299 CC lib/ublk/ublk.o 00:10:17.299 CC lib/scsi/port.o 00:10:17.299 CC lib/nvmf/ctrlr.o 00:10:17.299 CC lib/nbd/nbd.o 00:10:17.299 CC lib/ftl/ftl_core.o 00:10:17.299 CC lib/ftl/ftl_init.o 00:10:17.299 CC lib/scsi/scsi.o 00:10:17.299 CC lib/scsi/scsi_bdev.o 00:10:17.558 CC lib/nvmf/ctrlr_discovery.o 00:10:17.558 CC lib/nvmf/ctrlr_bdev.o 00:10:17.558 LIB libspdk_blobfs.a 00:10:17.558 CC lib/ftl/ftl_layout.o 00:10:17.558 SO libspdk_blobfs.so.10.0 00:10:17.558 CC lib/scsi/scsi_pr.o 00:10:17.558 SYMLINK libspdk_blobfs.so 00:10:17.558 CC lib/nbd/nbd_rpc.o 00:10:17.558 LIB libspdk_lvol.a 00:10:17.558 CC lib/scsi/scsi_rpc.o 00:10:17.558 SO libspdk_lvol.so.10.0 00:10:17.816 SYMLINK libspdk_lvol.so 00:10:17.816 CC lib/scsi/task.o 00:10:17.816 CC lib/ftl/ftl_debug.o 00:10:17.816 LIB libspdk_nbd.a 00:10:17.816 SO libspdk_nbd.so.7.0 00:10:17.816 CC lib/ftl/ftl_io.o 00:10:17.816 SYMLINK libspdk_nbd.so 00:10:17.816 CC lib/ftl/ftl_sb.o 00:10:17.816 CC lib/ftl/ftl_l2p.o 00:10:17.816 CC lib/nvmf/subsystem.o 00:10:18.075 LIB libspdk_scsi.a 00:10:18.075 LIB libspdk_ublk.a 00:10:18.075 CC lib/nvmf/nvmf.o 00:10:18.075 CC lib/ftl/ftl_l2p_flat.o 00:10:18.075 SO libspdk_scsi.so.9.0 00:10:18.075 SO libspdk_ublk.so.3.0 00:10:18.075 CC lib/ftl/ftl_nv_cache.o 00:10:18.075 SYMLINK libspdk_scsi.so 00:10:18.075 CC lib/ftl/ftl_band.o 00:10:18.075 SYMLINK libspdk_ublk.so 00:10:18.075 CC lib/ftl/ftl_band_ops.o 00:10:18.334 CC lib/ftl/ftl_writer.o 00:10:18.335 CC lib/nvmf/nvmf_rpc.o 00:10:18.335 CC lib/vhost/vhost.o 00:10:18.335 CC lib/iscsi/conn.o 00:10:18.594 CC lib/iscsi/init_grp.o 00:10:18.594 CC lib/iscsi/iscsi.o 00:10:18.594 CC lib/ftl/ftl_rq.o 00:10:18.594 CC lib/vhost/vhost_rpc.o 00:10:18.854 CC lib/vhost/vhost_scsi.o 00:10:18.854 CC lib/ftl/ftl_reloc.o 00:10:18.854 
CC lib/ftl/ftl_l2p_cache.o 00:10:18.854 CC lib/ftl/ftl_p2l.o 00:10:19.112 CC lib/ftl/ftl_p2l_log.o 00:10:19.112 CC lib/iscsi/param.o 00:10:19.371 CC lib/iscsi/portal_grp.o 00:10:19.371 CC lib/iscsi/tgt_node.o 00:10:19.371 CC lib/vhost/vhost_blk.o 00:10:19.371 CC lib/nvmf/transport.o 00:10:19.371 CC lib/nvmf/tcp.o 00:10:19.629 CC lib/nvmf/stubs.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:19.629 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:19.886 CC lib/iscsi/iscsi_subsystem.o 00:10:19.886 CC lib/iscsi/iscsi_rpc.o 00:10:19.886 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:19.886 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:20.152 CC lib/iscsi/task.o 00:10:20.153 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:20.153 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:20.153 CC lib/vhost/rte_vhost_user.o 00:10:20.153 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:20.153 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:20.153 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:20.411 CC lib/ftl/utils/ftl_conf.o 00:10:20.411 CC lib/ftl/utils/ftl_md.o 00:10:20.411 LIB libspdk_iscsi.a 00:10:20.411 CC lib/ftl/utils/ftl_mempool.o 00:10:20.411 CC lib/ftl/utils/ftl_bitmap.o 00:10:20.411 CC lib/ftl/utils/ftl_property.o 00:10:20.411 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:20.411 SO libspdk_iscsi.so.8.0 00:10:20.411 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:20.669 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:20.669 CC lib/nvmf/mdns_server.o 00:10:20.669 SYMLINK libspdk_iscsi.so 00:10:20.669 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:20.669 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:20.669 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:20.669 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:20.669 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:20.669 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:20.669 CC lib/nvmf/rdma.o 00:10:20.926 CC lib/nvmf/auth.o 00:10:20.926 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:10:20.926 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:20.926 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:10:20.926 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:10:20.926 CC lib/ftl/base/ftl_base_dev.o 00:10:20.926 CC lib/ftl/base/ftl_base_bdev.o 00:10:21.184 CC lib/ftl/ftl_trace.o 00:10:21.184 LIB libspdk_vhost.a 00:10:21.184 SO libspdk_vhost.so.8.0 00:10:21.444 LIB libspdk_ftl.a 00:10:21.444 SYMLINK libspdk_vhost.so 00:10:21.702 SO libspdk_ftl.so.9.0 00:10:21.961 SYMLINK libspdk_ftl.so 00:10:23.335 LIB libspdk_nvmf.a 00:10:23.335 SO libspdk_nvmf.so.20.0 00:10:23.595 SYMLINK libspdk_nvmf.so 00:10:24.161 CC module/env_dpdk/env_dpdk_rpc.o 00:10:24.161 CC module/keyring/file/keyring.o 00:10:24.161 CC module/scheduler/gscheduler/gscheduler.o 00:10:24.161 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:24.161 CC module/accel/ioat/accel_ioat.o 00:10:24.161 CC module/accel/error/accel_error.o 00:10:24.161 CC module/sock/posix/posix.o 00:10:24.161 CC module/blob/bdev/blob_bdev.o 00:10:24.161 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:24.161 CC module/fsdev/aio/fsdev_aio.o 00:10:24.161 LIB libspdk_env_dpdk_rpc.a 00:10:24.161 SO libspdk_env_dpdk_rpc.so.6.0 00:10:24.161 SYMLINK libspdk_env_dpdk_rpc.so 00:10:24.161 CC module/accel/ioat/accel_ioat_rpc.o 00:10:24.161 CC module/keyring/file/keyring_rpc.o 00:10:24.161 LIB libspdk_scheduler_gscheduler.a 00:10:24.161 LIB libspdk_scheduler_dpdk_governor.a 00:10:24.161 SO libspdk_scheduler_gscheduler.so.4.0 00:10:24.161 CC module/accel/error/accel_error_rpc.o 00:10:24.419 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:24.419 LIB libspdk_scheduler_dynamic.a 00:10:24.419 SO libspdk_scheduler_dynamic.so.4.0 00:10:24.419 SYMLINK libspdk_scheduler_gscheduler.so 00:10:24.419 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:24.419 LIB libspdk_accel_ioat.a 00:10:24.419 CC module/fsdev/aio/fsdev_aio_rpc.o 00:10:24.419 LIB libspdk_keyring_file.a 00:10:24.419 SYMLINK libspdk_scheduler_dynamic.so 00:10:24.419 
SO libspdk_accel_ioat.so.6.0 00:10:24.419 SO libspdk_keyring_file.so.2.0 00:10:24.419 LIB libspdk_blob_bdev.a 00:10:24.419 LIB libspdk_accel_error.a 00:10:24.419 SO libspdk_blob_bdev.so.11.0 00:10:24.419 CC module/accel/dsa/accel_dsa.o 00:10:24.419 SYMLINK libspdk_accel_ioat.so 00:10:24.419 SYMLINK libspdk_keyring_file.so 00:10:24.419 CC module/accel/dsa/accel_dsa_rpc.o 00:10:24.419 SO libspdk_accel_error.so.2.0 00:10:24.419 CC module/fsdev/aio/linux_aio_mgr.o 00:10:24.419 CC module/accel/iaa/accel_iaa.o 00:10:24.419 SYMLINK libspdk_blob_bdev.so 00:10:24.419 CC module/keyring/linux/keyring.o 00:10:24.692 SYMLINK libspdk_accel_error.so 00:10:24.692 CC module/keyring/linux/keyring_rpc.o 00:10:24.692 CC module/accel/iaa/accel_iaa_rpc.o 00:10:24.692 LIB libspdk_keyring_linux.a 00:10:24.692 SO libspdk_keyring_linux.so.1.0 00:10:24.692 LIB libspdk_accel_dsa.a 00:10:24.692 LIB libspdk_accel_iaa.a 00:10:24.692 CC module/bdev/delay/vbdev_delay.o 00:10:24.692 CC module/blobfs/bdev/blobfs_bdev.o 00:10:24.692 SYMLINK libspdk_keyring_linux.so 00:10:24.692 SO libspdk_accel_iaa.so.3.0 00:10:24.950 SO libspdk_accel_dsa.so.5.0 00:10:24.950 CC module/bdev/error/vbdev_error.o 00:10:24.950 CC module/bdev/gpt/gpt.o 00:10:24.950 SYMLINK libspdk_accel_iaa.so 00:10:24.950 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:24.950 SYMLINK libspdk_accel_dsa.so 00:10:24.950 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:24.950 LIB libspdk_sock_posix.a 00:10:24.950 LIB libspdk_fsdev_aio.a 00:10:24.950 CC module/bdev/lvol/vbdev_lvol.o 00:10:24.950 SO libspdk_sock_posix.so.6.0 00:10:24.950 SO libspdk_fsdev_aio.so.1.0 00:10:24.950 CC module/bdev/malloc/bdev_malloc.o 00:10:24.950 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:24.950 SYMLINK libspdk_fsdev_aio.so 00:10:25.208 SYMLINK libspdk_sock_posix.so 00:10:25.208 CC module/bdev/gpt/vbdev_gpt.o 00:10:25.208 LIB libspdk_blobfs_bdev.a 00:10:25.208 SO libspdk_blobfs_bdev.so.6.0 00:10:25.208 CC module/bdev/error/vbdev_error_rpc.o 00:10:25.208 LIB 
libspdk_bdev_delay.a 00:10:25.208 SYMLINK libspdk_blobfs_bdev.so 00:10:25.208 SO libspdk_bdev_delay.so.6.0 00:10:25.208 CC module/bdev/null/bdev_null.o 00:10:25.208 CC module/bdev/nvme/bdev_nvme.o 00:10:25.208 CC module/bdev/passthru/vbdev_passthru.o 00:10:25.208 SYMLINK libspdk_bdev_delay.so 00:10:25.466 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:25.466 LIB libspdk_bdev_error.a 00:10:25.466 CC module/bdev/raid/bdev_raid.o 00:10:25.466 LIB libspdk_bdev_gpt.a 00:10:25.466 CC module/bdev/split/vbdev_split.o 00:10:25.466 SO libspdk_bdev_error.so.6.0 00:10:25.466 LIB libspdk_bdev_malloc.a 00:10:25.466 SO libspdk_bdev_gpt.so.6.0 00:10:25.466 SO libspdk_bdev_malloc.so.6.0 00:10:25.466 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:25.466 SYMLINK libspdk_bdev_error.so 00:10:25.466 SYMLINK libspdk_bdev_gpt.so 00:10:25.466 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:25.466 CC module/bdev/nvme/nvme_rpc.o 00:10:25.466 CC module/bdev/nvme/bdev_mdns_client.o 00:10:25.466 SYMLINK libspdk_bdev_malloc.so 00:10:25.466 CC module/bdev/nvme/vbdev_opal.o 00:10:25.724 CC module/bdev/null/bdev_null_rpc.o 00:10:25.724 LIB libspdk_bdev_passthru.a 00:10:25.724 CC module/bdev/split/vbdev_split_rpc.o 00:10:25.724 SO libspdk_bdev_passthru.so.6.0 00:10:25.724 SYMLINK libspdk_bdev_passthru.so 00:10:25.724 LIB libspdk_bdev_null.a 00:10:25.724 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:25.724 LIB libspdk_bdev_split.a 00:10:25.724 SO libspdk_bdev_null.so.6.0 00:10:25.724 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:25.982 SO libspdk_bdev_split.so.6.0 00:10:25.982 LIB libspdk_bdev_lvol.a 00:10:25.982 CC module/bdev/aio/bdev_aio.o 00:10:25.982 SYMLINK libspdk_bdev_null.so 00:10:25.982 CC module/bdev/aio/bdev_aio_rpc.o 00:10:25.982 SO libspdk_bdev_lvol.so.6.0 00:10:25.982 CC module/bdev/ftl/bdev_ftl.o 00:10:25.982 SYMLINK libspdk_bdev_split.so 00:10:25.982 SYMLINK libspdk_bdev_lvol.so 00:10:25.982 CC module/bdev/raid/bdev_raid_rpc.o 00:10:25.982 CC module/bdev/raid/bdev_raid_sb.o 00:10:25.982 
CC module/bdev/raid/raid0.o 00:10:25.982 CC module/bdev/iscsi/bdev_iscsi.o 00:10:26.241 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:26.241 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:26.241 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:26.241 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:26.241 LIB libspdk_bdev_aio.a 00:10:26.241 SO libspdk_bdev_aio.so.6.0 00:10:26.241 CC module/bdev/raid/raid1.o 00:10:26.500 CC module/bdev/raid/concat.o 00:10:26.500 LIB libspdk_bdev_zone_block.a 00:10:26.500 SYMLINK libspdk_bdev_aio.so 00:10:26.500 SO libspdk_bdev_zone_block.so.6.0 00:10:26.500 CC module/bdev/raid/raid5f.o 00:10:26.500 LIB libspdk_bdev_ftl.a 00:10:26.500 SYMLINK libspdk_bdev_zone_block.so 00:10:26.500 LIB libspdk_bdev_iscsi.a 00:10:26.500 SO libspdk_bdev_ftl.so.6.0 00:10:26.500 SO libspdk_bdev_iscsi.so.6.0 00:10:26.500 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:26.500 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:26.500 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:26.500 SYMLINK libspdk_bdev_ftl.so 00:10:26.500 SYMLINK libspdk_bdev_iscsi.so 00:10:27.067 LIB libspdk_bdev_raid.a 00:10:27.067 SO libspdk_bdev_raid.so.6.0 00:10:27.067 LIB libspdk_bdev_virtio.a 00:10:27.326 SYMLINK libspdk_bdev_raid.so 00:10:27.326 SO libspdk_bdev_virtio.so.6.0 00:10:27.326 SYMLINK libspdk_bdev_virtio.so 00:10:28.262 LIB libspdk_bdev_nvme.a 00:10:28.521 SO libspdk_bdev_nvme.so.7.1 00:10:28.521 SYMLINK libspdk_bdev_nvme.so 00:10:29.087 CC module/event/subsystems/iobuf/iobuf.o 00:10:29.087 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:29.087 CC module/event/subsystems/vmd/vmd.o 00:10:29.087 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:29.087 CC module/event/subsystems/keyring/keyring.o 00:10:29.087 CC module/event/subsystems/sock/sock.o 00:10:29.087 CC module/event/subsystems/scheduler/scheduler.o 00:10:29.087 CC module/event/subsystems/fsdev/fsdev.o 00:10:29.087 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:29.347 LIB libspdk_event_sock.a 00:10:29.347 LIB 
libspdk_event_keyring.a 00:10:29.347 LIB libspdk_event_scheduler.a 00:10:29.347 LIB libspdk_event_fsdev.a 00:10:29.347 LIB libspdk_event_vmd.a 00:10:29.347 LIB libspdk_event_iobuf.a 00:10:29.347 LIB libspdk_event_vhost_blk.a 00:10:29.347 SO libspdk_event_sock.so.5.0 00:10:29.347 SO libspdk_event_scheduler.so.4.0 00:10:29.347 SO libspdk_event_keyring.so.1.0 00:10:29.347 SO libspdk_event_fsdev.so.1.0 00:10:29.347 SO libspdk_event_vhost_blk.so.3.0 00:10:29.347 SO libspdk_event_iobuf.so.3.0 00:10:29.347 SO libspdk_event_vmd.so.6.0 00:10:29.347 SYMLINK libspdk_event_keyring.so 00:10:29.347 SYMLINK libspdk_event_scheduler.so 00:10:29.347 SYMLINK libspdk_event_fsdev.so 00:10:29.347 SYMLINK libspdk_event_sock.so 00:10:29.347 SYMLINK libspdk_event_vhost_blk.so 00:10:29.347 SYMLINK libspdk_event_vmd.so 00:10:29.347 SYMLINK libspdk_event_iobuf.so 00:10:29.914 CC module/event/subsystems/accel/accel.o 00:10:29.914 LIB libspdk_event_accel.a 00:10:29.914 SO libspdk_event_accel.so.6.0 00:10:29.914 SYMLINK libspdk_event_accel.so 00:10:30.480 CC module/event/subsystems/bdev/bdev.o 00:10:30.738 LIB libspdk_event_bdev.a 00:10:30.738 SO libspdk_event_bdev.so.6.0 00:10:30.738 SYMLINK libspdk_event_bdev.so 00:10:30.996 CC module/event/subsystems/nbd/nbd.o 00:10:30.996 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:30.996 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:30.996 CC module/event/subsystems/ublk/ublk.o 00:10:30.996 CC module/event/subsystems/scsi/scsi.o 00:10:31.254 LIB libspdk_event_nbd.a 00:10:31.254 SO libspdk_event_nbd.so.6.0 00:10:31.254 LIB libspdk_event_scsi.a 00:10:31.254 LIB libspdk_event_ublk.a 00:10:31.254 SO libspdk_event_scsi.so.6.0 00:10:31.254 SO libspdk_event_ublk.so.3.0 00:10:31.254 SYMLINK libspdk_event_nbd.so 00:10:31.254 LIB libspdk_event_nvmf.a 00:10:31.254 SYMLINK libspdk_event_scsi.so 00:10:31.254 SYMLINK libspdk_event_ublk.so 00:10:31.512 SO libspdk_event_nvmf.so.6.0 00:10:31.512 SYMLINK libspdk_event_nvmf.so 00:10:31.771 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:31.771 CC module/event/subsystems/iscsi/iscsi.o 00:10:31.771 LIB libspdk_event_iscsi.a 00:10:31.771 LIB libspdk_event_vhost_scsi.a 00:10:32.037 SO libspdk_event_vhost_scsi.so.3.0 00:10:32.037 SO libspdk_event_iscsi.so.6.0 00:10:32.037 SYMLINK libspdk_event_iscsi.so 00:10:32.037 SYMLINK libspdk_event_vhost_scsi.so 00:10:32.349 SO libspdk.so.6.0 00:10:32.349 SYMLINK libspdk.so 00:10:32.627 CXX app/trace/trace.o 00:10:32.627 CC app/trace_record/trace_record.o 00:10:32.627 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:32.627 CC app/nvmf_tgt/nvmf_main.o 00:10:32.627 CC app/iscsi_tgt/iscsi_tgt.o 00:10:32.627 CC app/spdk_tgt/spdk_tgt.o 00:10:32.627 CC test/thread/poller_perf/poller_perf.o 00:10:32.627 CC examples/util/zipf/zipf.o 00:10:32.627 CC examples/ioat/perf/perf.o 00:10:32.627 CC test/dma/test_dma/test_dma.o 00:10:32.627 LINK nvmf_tgt 00:10:32.627 LINK interrupt_tgt 00:10:32.885 LINK poller_perf 00:10:32.885 LINK iscsi_tgt 00:10:32.885 LINK spdk_tgt 00:10:32.885 LINK zipf 00:10:32.885 LINK spdk_trace_record 00:10:32.885 LINK ioat_perf 00:10:32.885 LINK spdk_trace 00:10:33.144 CC app/spdk_lspci/spdk_lspci.o 00:10:33.144 TEST_HEADER include/spdk/accel.h 00:10:33.144 CC app/spdk_nvme_perf/perf.o 00:10:33.144 TEST_HEADER include/spdk/accel_module.h 00:10:33.144 TEST_HEADER include/spdk/assert.h 00:10:33.144 TEST_HEADER include/spdk/barrier.h 00:10:33.144 TEST_HEADER include/spdk/base64.h 00:10:33.144 TEST_HEADER include/spdk/bdev.h 00:10:33.144 TEST_HEADER include/spdk/bdev_module.h 00:10:33.144 TEST_HEADER include/spdk/bdev_zone.h 00:10:33.144 TEST_HEADER include/spdk/bit_array.h 00:10:33.144 TEST_HEADER include/spdk/bit_pool.h 00:10:33.144 CC app/spdk_nvme_identify/identify.o 00:10:33.145 TEST_HEADER include/spdk/blob_bdev.h 00:10:33.145 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:33.145 TEST_HEADER include/spdk/blobfs.h 00:10:33.145 TEST_HEADER include/spdk/blob.h 00:10:33.145 TEST_HEADER include/spdk/conf.h 
00:10:33.145 TEST_HEADER include/spdk/config.h 00:10:33.145 TEST_HEADER include/spdk/cpuset.h 00:10:33.145 CC examples/ioat/verify/verify.o 00:10:33.145 TEST_HEADER include/spdk/crc16.h 00:10:33.145 TEST_HEADER include/spdk/crc32.h 00:10:33.145 TEST_HEADER include/spdk/crc64.h 00:10:33.145 TEST_HEADER include/spdk/dif.h 00:10:33.145 CC app/spdk_nvme_discover/discovery_aer.o 00:10:33.145 TEST_HEADER include/spdk/dma.h 00:10:33.145 TEST_HEADER include/spdk/endian.h 00:10:33.145 TEST_HEADER include/spdk/env_dpdk.h 00:10:33.145 TEST_HEADER include/spdk/env.h 00:10:33.145 TEST_HEADER include/spdk/event.h 00:10:33.145 TEST_HEADER include/spdk/fd_group.h 00:10:33.145 TEST_HEADER include/spdk/fd.h 00:10:33.145 TEST_HEADER include/spdk/file.h 00:10:33.145 TEST_HEADER include/spdk/fsdev.h 00:10:33.145 TEST_HEADER include/spdk/fsdev_module.h 00:10:33.145 TEST_HEADER include/spdk/ftl.h 00:10:33.145 TEST_HEADER include/spdk/fuse_dispatcher.h 00:10:33.145 TEST_HEADER include/spdk/gpt_spec.h 00:10:33.145 TEST_HEADER include/spdk/hexlify.h 00:10:33.145 TEST_HEADER include/spdk/histogram_data.h 00:10:33.145 TEST_HEADER include/spdk/idxd.h 00:10:33.145 TEST_HEADER include/spdk/idxd_spec.h 00:10:33.145 TEST_HEADER include/spdk/init.h 00:10:33.145 TEST_HEADER include/spdk/ioat.h 00:10:33.145 TEST_HEADER include/spdk/ioat_spec.h 00:10:33.145 CC test/app/bdev_svc/bdev_svc.o 00:10:33.145 TEST_HEADER include/spdk/iscsi_spec.h 00:10:33.145 TEST_HEADER include/spdk/json.h 00:10:33.145 TEST_HEADER include/spdk/jsonrpc.h 00:10:33.145 TEST_HEADER include/spdk/keyring.h 00:10:33.145 TEST_HEADER include/spdk/keyring_module.h 00:10:33.145 TEST_HEADER include/spdk/likely.h 00:10:33.145 TEST_HEADER include/spdk/log.h 00:10:33.145 TEST_HEADER include/spdk/lvol.h 00:10:33.145 TEST_HEADER include/spdk/md5.h 00:10:33.145 TEST_HEADER include/spdk/memory.h 00:10:33.145 TEST_HEADER include/spdk/mmio.h 00:10:33.145 TEST_HEADER include/spdk/nbd.h 00:10:33.145 TEST_HEADER include/spdk/net.h 00:10:33.145 
TEST_HEADER include/spdk/notify.h 00:10:33.145 LINK spdk_lspci 00:10:33.145 TEST_HEADER include/spdk/nvme.h 00:10:33.145 TEST_HEADER include/spdk/nvme_intel.h 00:10:33.145 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:33.145 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:33.145 TEST_HEADER include/spdk/nvme_spec.h 00:10:33.145 TEST_HEADER include/spdk/nvme_zns.h 00:10:33.145 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:33.145 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:33.145 LINK test_dma 00:10:33.145 TEST_HEADER include/spdk/nvmf.h 00:10:33.145 TEST_HEADER include/spdk/nvmf_spec.h 00:10:33.145 TEST_HEADER include/spdk/nvmf_transport.h 00:10:33.145 TEST_HEADER include/spdk/opal.h 00:10:33.145 TEST_HEADER include/spdk/opal_spec.h 00:10:33.145 CC app/spdk_top/spdk_top.o 00:10:33.145 TEST_HEADER include/spdk/pci_ids.h 00:10:33.145 TEST_HEADER include/spdk/pipe.h 00:10:33.145 TEST_HEADER include/spdk/queue.h 00:10:33.145 TEST_HEADER include/spdk/reduce.h 00:10:33.145 TEST_HEADER include/spdk/rpc.h 00:10:33.145 TEST_HEADER include/spdk/scheduler.h 00:10:33.145 TEST_HEADER include/spdk/scsi.h 00:10:33.145 TEST_HEADER include/spdk/scsi_spec.h 00:10:33.145 TEST_HEADER include/spdk/sock.h 00:10:33.145 TEST_HEADER include/spdk/stdinc.h 00:10:33.145 TEST_HEADER include/spdk/string.h 00:10:33.145 TEST_HEADER include/spdk/thread.h 00:10:33.145 TEST_HEADER include/spdk/trace.h 00:10:33.145 TEST_HEADER include/spdk/trace_parser.h 00:10:33.145 CC examples/thread/thread/thread_ex.o 00:10:33.145 TEST_HEADER include/spdk/tree.h 00:10:33.145 TEST_HEADER include/spdk/ublk.h 00:10:33.145 TEST_HEADER include/spdk/util.h 00:10:33.145 TEST_HEADER include/spdk/uuid.h 00:10:33.145 TEST_HEADER include/spdk/version.h 00:10:33.145 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:33.145 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:33.145 TEST_HEADER include/spdk/vhost.h 00:10:33.145 TEST_HEADER include/spdk/vmd.h 00:10:33.145 TEST_HEADER include/spdk/xor.h 00:10:33.145 TEST_HEADER 
include/spdk/zipf.h 00:10:33.145 CXX test/cpp_headers/accel.o 00:10:33.404 LINK bdev_svc 00:10:33.404 LINK verify 00:10:33.404 LINK spdk_nvme_discover 00:10:33.404 CXX test/cpp_headers/accel_module.o 00:10:33.404 LINK thread 00:10:33.663 CC app/vhost/vhost.o 00:10:33.663 CC test/env/vtophys/vtophys.o 00:10:33.663 CC test/env/mem_callbacks/mem_callbacks.o 00:10:33.663 CXX test/cpp_headers/assert.o 00:10:33.663 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:33.663 LINK vhost 00:10:33.663 LINK vtophys 00:10:33.663 CC examples/sock/hello_world/hello_sock.o 00:10:33.922 CXX test/cpp_headers/barrier.o 00:10:33.922 CXX test/cpp_headers/base64.o 00:10:33.922 CC test/event/event_perf/event_perf.o 00:10:33.922 LINK spdk_nvme_perf 00:10:33.922 LINK hello_sock 00:10:34.181 CC test/rpc_client/rpc_client_test.o 00:10:34.181 LINK event_perf 00:10:34.181 CXX test/cpp_headers/bdev.o 00:10:34.181 LINK mem_callbacks 00:10:34.181 LINK spdk_nvme_identify 00:10:34.181 CC test/nvme/aer/aer.o 00:10:34.181 LINK nvme_fuzz 00:10:34.181 LINK spdk_top 00:10:34.181 LINK rpc_client_test 00:10:34.181 CC test/event/reactor/reactor.o 00:10:34.181 CXX test/cpp_headers/bdev_module.o 00:10:34.181 CC test/nvme/reset/reset.o 00:10:34.439 CC examples/vmd/lsvmd/lsvmd.o 00:10:34.439 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:34.439 CC test/nvme/sgl/sgl.o 00:10:34.439 LINK reactor 00:10:34.439 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:34.439 LINK aer 00:10:34.439 LINK lsvmd 00:10:34.439 CC app/spdk_dd/spdk_dd.o 00:10:34.439 CXX test/cpp_headers/bdev_zone.o 00:10:34.439 LINK env_dpdk_post_init 00:10:34.697 LINK reset 00:10:34.697 CC app/fio/nvme/fio_plugin.o 00:10:34.698 CC test/event/reactor_perf/reactor_perf.o 00:10:34.698 CXX test/cpp_headers/bit_array.o 00:10:34.698 LINK sgl 00:10:34.957 CC examples/vmd/led/led.o 00:10:34.957 CC test/env/memory/memory_ut.o 00:10:34.957 LINK reactor_perf 00:10:34.957 CC test/accel/dif/dif.o 00:10:34.957 CXX test/cpp_headers/bit_pool.o 00:10:34.957 LINK 
spdk_dd 00:10:34.957 CC test/blobfs/mkfs/mkfs.o 00:10:34.957 CC test/nvme/e2edp/nvme_dp.o 00:10:34.957 LINK led 00:10:34.957 CXX test/cpp_headers/blob_bdev.o 00:10:35.216 CC test/event/app_repeat/app_repeat.o 00:10:35.216 LINK mkfs 00:10:35.216 CC test/env/pci/pci_ut.o 00:10:35.216 LINK spdk_nvme 00:10:35.216 CXX test/cpp_headers/blobfs_bdev.o 00:10:35.216 LINK app_repeat 00:10:35.216 LINK nvme_dp 00:10:35.475 CC examples/idxd/perf/perf.o 00:10:35.475 CXX test/cpp_headers/blobfs.o 00:10:35.475 CC app/fio/bdev/fio_plugin.o 00:10:35.475 CC test/nvme/overhead/overhead.o 00:10:35.475 CC test/event/scheduler/scheduler.o 00:10:35.475 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:35.733 CXX test/cpp_headers/blob.o 00:10:35.733 LINK pci_ut 00:10:35.733 LINK dif 00:10:35.733 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:35.733 LINK idxd_perf 00:10:35.733 CXX test/cpp_headers/conf.o 00:10:35.733 LINK overhead 00:10:35.734 LINK scheduler 00:10:35.992 CXX test/cpp_headers/config.o 00:10:35.992 CXX test/cpp_headers/cpuset.o 00:10:35.992 LINK spdk_bdev 00:10:35.992 LINK memory_ut 00:10:35.992 CXX test/cpp_headers/crc16.o 00:10:35.992 CC test/nvme/err_injection/err_injection.o 00:10:35.992 CC test/nvme/startup/startup.o 00:10:35.992 CC examples/fsdev/hello_world/hello_fsdev.o 00:10:35.992 CC examples/accel/perf/accel_perf.o 00:10:36.251 CC examples/blob/hello_world/hello_blob.o 00:10:36.251 CC test/nvme/reserve/reserve.o 00:10:36.251 LINK vhost_fuzz 00:10:36.251 CXX test/cpp_headers/crc32.o 00:10:36.251 LINK err_injection 00:10:36.251 CC test/nvme/simple_copy/simple_copy.o 00:10:36.251 LINK startup 00:10:36.251 LINK hello_fsdev 00:10:36.508 LINK iscsi_fuzz 00:10:36.508 LINK hello_blob 00:10:36.508 CXX test/cpp_headers/crc64.o 00:10:36.508 LINK reserve 00:10:36.508 CC test/nvme/connect_stress/connect_stress.o 00:10:36.508 LINK simple_copy 00:10:36.508 CXX test/cpp_headers/dif.o 00:10:36.774 LINK accel_perf 00:10:36.774 LINK connect_stress 00:10:36.774 CC 
test/nvme/boot_partition/boot_partition.o 00:10:36.774 CC test/nvme/compliance/nvme_compliance.o 00:10:36.774 CC test/app/histogram_perf/histogram_perf.o 00:10:36.774 CC test/bdev/bdevio/bdevio.o 00:10:36.774 CC examples/blob/cli/blobcli.o 00:10:36.774 CXX test/cpp_headers/dma.o 00:10:36.774 CC test/lvol/esnap/esnap.o 00:10:36.774 CC test/app/jsoncat/jsoncat.o 00:10:36.774 CXX test/cpp_headers/endian.o 00:10:36.774 LINK histogram_perf 00:10:36.774 LINK boot_partition 00:10:37.033 CC test/nvme/fused_ordering/fused_ordering.o 00:10:37.033 LINK jsoncat 00:10:37.033 CXX test/cpp_headers/env_dpdk.o 00:10:37.033 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:37.033 CXX test/cpp_headers/env.o 00:10:37.033 CXX test/cpp_headers/event.o 00:10:37.033 LINK nvme_compliance 00:10:37.033 LINK fused_ordering 00:10:37.033 LINK bdevio 00:10:37.292 CXX test/cpp_headers/fd_group.o 00:10:37.292 LINK doorbell_aers 00:10:37.292 CC test/app/stub/stub.o 00:10:37.292 CXX test/cpp_headers/fd.o 00:10:37.292 CC test/nvme/fdp/fdp.o 00:10:37.292 LINK blobcli 00:10:37.292 CC test/nvme/cuse/cuse.o 00:10:37.292 CXX test/cpp_headers/file.o 00:10:37.292 CXX test/cpp_headers/fsdev.o 00:10:37.550 LINK stub 00:10:37.550 CXX test/cpp_headers/fsdev_module.o 00:10:37.550 CC examples/nvme/hello_world/hello_world.o 00:10:37.550 CC examples/bdev/hello_world/hello_bdev.o 00:10:37.550 CXX test/cpp_headers/fuse_dispatcher.o 00:10:37.550 CXX test/cpp_headers/ftl.o 00:10:37.550 CC examples/bdev/bdevperf/bdevperf.o 00:10:37.550 LINK fdp 00:10:37.808 CC examples/nvme/reconnect/reconnect.o 00:10:37.808 LINK hello_world 00:10:37.808 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:37.808 LINK hello_bdev 00:10:37.808 CXX test/cpp_headers/gpt_spec.o 00:10:37.808 CC examples/nvme/arbitration/arbitration.o 00:10:38.066 CC examples/nvme/hotplug/hotplug.o 00:10:38.066 CXX test/cpp_headers/hexlify.o 00:10:38.066 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:38.066 CXX test/cpp_headers/histogram_data.o 00:10:38.066 LINK 
reconnect 00:10:38.066 CXX test/cpp_headers/idxd.o 00:10:38.324 LINK hotplug 00:10:38.324 LINK arbitration 00:10:38.324 LINK cmb_copy 00:10:38.324 CC examples/nvme/abort/abort.o 00:10:38.324 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:38.324 CXX test/cpp_headers/idxd_spec.o 00:10:38.324 LINK nvme_manage 00:10:38.324 CXX test/cpp_headers/init.o 00:10:38.324 CXX test/cpp_headers/ioat.o 00:10:38.324 CXX test/cpp_headers/ioat_spec.o 00:10:38.583 LINK pmr_persistence 00:10:38.583 CXX test/cpp_headers/iscsi_spec.o 00:10:38.583 CXX test/cpp_headers/json.o 00:10:38.583 CXX test/cpp_headers/jsonrpc.o 00:10:38.583 CXX test/cpp_headers/keyring.o 00:10:38.583 LINK bdevperf 00:10:38.583 CXX test/cpp_headers/keyring_module.o 00:10:38.583 CXX test/cpp_headers/likely.o 00:10:38.583 CXX test/cpp_headers/log.o 00:10:38.583 CXX test/cpp_headers/lvol.o 00:10:38.839 LINK abort 00:10:38.839 CXX test/cpp_headers/md5.o 00:10:38.839 CXX test/cpp_headers/memory.o 00:10:38.839 LINK cuse 00:10:38.839 CXX test/cpp_headers/mmio.o 00:10:38.839 CXX test/cpp_headers/nbd.o 00:10:38.839 CXX test/cpp_headers/net.o 00:10:38.839 CXX test/cpp_headers/notify.o 00:10:38.839 CXX test/cpp_headers/nvme.o 00:10:38.839 CXX test/cpp_headers/nvme_intel.o 00:10:38.839 CXX test/cpp_headers/nvme_ocssd.o 00:10:38.839 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:39.099 CXX test/cpp_headers/nvme_spec.o 00:10:39.099 CXX test/cpp_headers/nvme_zns.o 00:10:39.099 CXX test/cpp_headers/nvmf_cmd.o 00:10:39.099 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:39.099 CXX test/cpp_headers/nvmf.o 00:10:39.099 CXX test/cpp_headers/nvmf_spec.o 00:10:39.099 CC examples/nvmf/nvmf/nvmf.o 00:10:39.099 CXX test/cpp_headers/nvmf_transport.o 00:10:39.099 CXX test/cpp_headers/opal.o 00:10:39.099 CXX test/cpp_headers/opal_spec.o 00:10:39.099 CXX test/cpp_headers/pci_ids.o 00:10:39.099 CXX test/cpp_headers/pipe.o 00:10:39.099 CXX test/cpp_headers/queue.o 00:10:39.099 CXX test/cpp_headers/reduce.o 00:10:39.358 CXX 
test/cpp_headers/rpc.o 00:10:39.358 CXX test/cpp_headers/scheduler.o 00:10:39.358 CXX test/cpp_headers/scsi.o 00:10:39.358 CXX test/cpp_headers/scsi_spec.o 00:10:39.358 CXX test/cpp_headers/sock.o 00:10:39.358 CXX test/cpp_headers/stdinc.o 00:10:39.358 CXX test/cpp_headers/string.o 00:10:39.358 CXX test/cpp_headers/thread.o 00:10:39.358 CXX test/cpp_headers/trace.o 00:10:39.358 LINK nvmf 00:10:39.358 CXX test/cpp_headers/trace_parser.o 00:10:39.358 CXX test/cpp_headers/tree.o 00:10:39.616 CXX test/cpp_headers/ublk.o 00:10:39.616 CXX test/cpp_headers/util.o 00:10:39.616 CXX test/cpp_headers/uuid.o 00:10:39.616 CXX test/cpp_headers/version.o 00:10:39.616 CXX test/cpp_headers/vfio_user_pci.o 00:10:39.616 CXX test/cpp_headers/vfio_user_spec.o 00:10:39.616 CXX test/cpp_headers/vhost.o 00:10:39.616 CXX test/cpp_headers/vmd.o 00:10:39.616 CXX test/cpp_headers/xor.o 00:10:39.616 CXX test/cpp_headers/zipf.o 00:10:42.895 LINK esnap 00:10:43.462 00:10:43.462 real 1m25.507s 00:10:43.462 user 7m13.319s 00:10:43.462 sys 1m50.067s 00:10:43.462 ************************************ 00:10:43.462 END TEST make 00:10:43.462 ************************************ 00:10:43.462 09:03:42 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:10:43.462 09:03:42 make -- common/autotest_common.sh@10 -- $ set +x 00:10:43.462 09:03:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:43.463 09:03:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:43.463 09:03:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:43.463 09:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:43.463 09:03:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:43.463 09:03:42 -- pm/common@44 -- $ pid=5249 00:10:43.463 09:03:42 -- pm/common@50 -- $ kill -TERM 5249 00:10:43.463 09:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:43.463 09:03:42 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:43.463 09:03:42 -- pm/common@44 -- $ pid=5251 00:10:43.463 09:03:42 -- pm/common@50 -- $ kill -TERM 5251 00:10:43.463 09:03:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:10:43.463 09:03:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:43.463 09:03:42 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:43.463 09:03:42 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:43.463 09:03:42 -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.722 09:03:42 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.722 09:03:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.722 09:03:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.722 09:03:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.722 09:03:42 -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.722 09:03:42 -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.722 09:03:42 -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.722 09:03:42 -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.722 09:03:42 -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.722 09:03:42 -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.722 09:03:42 -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.722 09:03:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.722 09:03:42 -- scripts/common.sh@344 -- # case "$op" in 00:10:43.722 09:03:42 -- scripts/common.sh@345 -- # : 1 00:10:43.722 09:03:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.722 09:03:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.722 09:03:42 -- scripts/common.sh@365 -- # decimal 1 00:10:43.722 09:03:42 -- scripts/common.sh@353 -- # local d=1 00:10:43.722 09:03:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.722 09:03:42 -- scripts/common.sh@355 -- # echo 1 00:10:43.722 09:03:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.722 09:03:42 -- scripts/common.sh@366 -- # decimal 2 00:10:43.722 09:03:42 -- scripts/common.sh@353 -- # local d=2 00:10:43.722 09:03:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.722 09:03:42 -- scripts/common.sh@355 -- # echo 2 00:10:43.722 09:03:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.722 09:03:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.722 09:03:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.722 09:03:42 -- scripts/common.sh@368 -- # return 0 00:10:43.722 09:03:42 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.722 09:03:42 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.722 --rc genhtml_branch_coverage=1 00:10:43.722 --rc genhtml_function_coverage=1 00:10:43.722 --rc genhtml_legend=1 00:10:43.722 --rc geninfo_all_blocks=1 00:10:43.722 --rc geninfo_unexecuted_blocks=1 00:10:43.722 00:10:43.722 ' 00:10:43.722 09:03:42 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.722 --rc genhtml_branch_coverage=1 00:10:43.722 --rc genhtml_function_coverage=1 00:10:43.722 --rc genhtml_legend=1 00:10:43.722 --rc geninfo_all_blocks=1 00:10:43.722 --rc geninfo_unexecuted_blocks=1 00:10:43.722 00:10:43.722 ' 00:10:43.722 09:03:42 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.722 --rc genhtml_branch_coverage=1 00:10:43.722 --rc 
genhtml_function_coverage=1 00:10:43.722 --rc genhtml_legend=1 00:10:43.722 --rc geninfo_all_blocks=1 00:10:43.722 --rc geninfo_unexecuted_blocks=1 00:10:43.722 00:10:43.722 ' 00:10:43.722 09:03:42 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.722 --rc genhtml_branch_coverage=1 00:10:43.722 --rc genhtml_function_coverage=1 00:10:43.722 --rc genhtml_legend=1 00:10:43.722 --rc geninfo_all_blocks=1 00:10:43.722 --rc geninfo_unexecuted_blocks=1 00:10:43.722 00:10:43.722 ' 00:10:43.722 09:03:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.722 09:03:42 -- nvmf/common.sh@7 -- # uname -s 00:10:43.722 09:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.722 09:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.722 09:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.722 09:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.722 09:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.722 09:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.722 09:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.722 09:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.722 09:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.722 09:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.722 09:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c18da4e-01f5-448a-ac6a-0f8254a46070 00:10:43.722 09:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=3c18da4e-01f5-448a-ac6a-0f8254a46070 00:10:43.722 09:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.722 09:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.722 09:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:43.722 09:03:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:10:43.722 09:03:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.722 09:03:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.722 09:03:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.722 09:03:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.722 09:03:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.722 09:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.722 09:03:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.722 09:03:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.722 09:03:42 -- paths/export.sh@5 -- # export PATH 00:10:43.722 09:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.722 09:03:42 -- nvmf/common.sh@51 -- # : 0 00:10:43.722 09:03:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.722 09:03:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.722 09:03:42 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:10:43.722 09:03:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.722 09:03:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.722 09:03:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.722 09:03:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.722 09:03:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.722 09:03:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.722 09:03:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:43.722 09:03:42 -- spdk/autotest.sh@32 -- # uname -s 00:10:43.722 09:03:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:43.722 09:03:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:43.722 09:03:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:43.722 09:03:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:43.722 09:03:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:43.722 09:03:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:43.722 09:03:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:43.722 09:03:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:43.723 09:03:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54216 00:10:43.723 09:03:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:43.723 09:03:42 -- pm/common@17 -- # local monitor 00:10:43.723 09:03:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:43.723 09:03:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:43.723 09:03:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:43.723 09:03:42 -- pm/common@25 -- # sleep 1 00:10:43.723 09:03:42 -- pm/common@21 -- # date +%s 00:10:43.723 09:03:42 -- 
pm/common@21 -- # date +%s 00:10:43.723 09:03:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730883822 00:10:43.723 09:03:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730883822 00:10:43.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730883822_collect-cpu-load.pm.log 00:10:43.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730883822_collect-vmstat.pm.log 00:10:44.658 09:03:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:44.658 09:03:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:44.658 09:03:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:44.658 09:03:43 -- common/autotest_common.sh@10 -- # set +x 00:10:44.658 09:03:43 -- spdk/autotest.sh@59 -- # create_test_list 00:10:44.658 09:03:43 -- common/autotest_common.sh@750 -- # xtrace_disable 00:10:44.658 09:03:43 -- common/autotest_common.sh@10 -- # set +x 00:10:44.915 09:03:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:44.915 09:03:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:44.915 09:03:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:44.915 09:03:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:44.915 09:03:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:44.915 09:03:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:44.915 09:03:43 -- common/autotest_common.sh@1455 -- # uname 00:10:44.915 09:03:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:10:44.915 09:03:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:44.915 09:03:43 -- common/autotest_common.sh@1475 -- 
# uname 00:10:44.915 09:03:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:10:44.915 09:03:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:44.915 09:03:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:44.915 lcov: LCOV version 1.15 00:10:44.915 09:03:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:59.787 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:59.787 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:17.855 09:04:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:11:17.855 09:04:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.855 09:04:14 -- common/autotest_common.sh@10 -- # set +x 00:11:17.855 09:04:14 -- spdk/autotest.sh@78 -- # rm -f 00:11:17.855 09:04:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.855 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:17.855 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:17.855 09:04:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:11:17.855 09:04:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:17.855 09:04:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:17.855 09:04:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:17.855 
09:04:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:17.855 09:04:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:17.855 09:04:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:17.855 09:04:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:17.855 09:04:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:11:17.855 09:04:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:11:17.855 09:04:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:17.855 09:04:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:11:17.855 09:04:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:11:17.855 09:04:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:17.855 09:04:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:11:17.855 09:04:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:11:17.855 09:04:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:17.855 09:04:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:17.855 09:04:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:11:17.855 09:04:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:17.855 09:04:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:17.855 09:04:14 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:11:17.855 09:04:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:17.855 09:04:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:17.855 No valid GPT data, bailing 00:11:17.855 09:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:17.855 09:04:15 -- scripts/common.sh@394 -- # pt= 00:11:17.855 09:04:15 -- scripts/common.sh@395 -- # return 1 00:11:17.855 09:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:17.855 1+0 records in 00:11:17.855 1+0 records out 00:11:17.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526792 s, 199 MB/s 00:11:17.855 09:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:17.855 09:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:17.855 09:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:11:17.855 09:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:11:17.856 09:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:17.856 No valid GPT data, bailing 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # pt= 00:11:17.856 09:04:15 -- scripts/common.sh@395 -- # return 1 00:11:17.856 09:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:17.856 1+0 records in 00:11:17.856 1+0 records out 00:11:17.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619362 s, 169 MB/s 00:11:17.856 09:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:17.856 09:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:17.856 09:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:11:17.856 09:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:11:17.856 09:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:11:17.856 No valid GPT data, bailing 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # pt= 00:11:17.856 09:04:15 -- scripts/common.sh@395 -- # return 1 00:11:17.856 09:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:17.856 1+0 records in 00:11:17.856 1+0 records out 00:11:17.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562612 s, 186 MB/s 00:11:17.856 09:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:17.856 09:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:17.856 09:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:11:17.856 09:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:11:17.856 09:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:17.856 No valid GPT data, bailing 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:17.856 09:04:15 -- scripts/common.sh@394 -- # pt= 00:11:17.856 09:04:15 -- scripts/common.sh@395 -- # return 1 00:11:17.856 09:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:17.856 1+0 records in 00:11:17.856 1+0 records out 00:11:17.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057141 s, 184 MB/s 00:11:17.856 09:04:15 -- spdk/autotest.sh@105 -- # sync 00:11:17.856 09:04:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:17.856 09:04:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:17.856 09:04:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:19.231 09:04:18 -- spdk/autotest.sh@111 -- # uname -s 00:11:19.231 09:04:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:11:19.231 09:04:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:11:19.231 09:04:18 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:11:20.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:20.165 Hugepages 00:11:20.165 node hugesize free / total 00:11:20.165 node0 1048576kB 0 / 0 00:11:20.165 node0 2048kB 0 / 0 00:11:20.165 00:11:20.165 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:20.165 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:20.165 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:20.423 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:11:20.423 09:04:19 -- spdk/autotest.sh@117 -- # uname -s 00:11:20.423 09:04:19 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:11:20.423 09:04:19 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:11:20.423 09:04:19 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:21.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:21.404 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:21.404 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:21.404 09:04:20 -- common/autotest_common.sh@1515 -- # sleep 1 00:11:22.341 09:04:21 -- common/autotest_common.sh@1516 -- # bdfs=() 00:11:22.341 09:04:21 -- common/autotest_common.sh@1516 -- # local bdfs 00:11:22.341 09:04:21 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:11:22.341 09:04:21 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:11:22.341 09:04:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:22.341 09:04:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:11:22.341 09:04:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:22.341 09:04:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:22.341 09:04:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:22.600 09:04:21 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:11:22.600 09:04:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:22.600 09:04:21 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:22.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:23.117 Waiting for block devices as requested 00:11:23.117 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.117 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.375 09:04:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:11:23.375 09:04:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:11:23.375 09:04:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:11:23.375 09:04:22 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:11:23.375 09:04:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1541 -- # continue 00:11:23.375 09:04:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:11:23.375 09:04:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:11:23.375 09:04:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:11:23.375 09:04:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:11:23.375 09:04:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:11:23.375 09:04:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:11:23.375 09:04:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:11:23.375 09:04:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:11:23.375 09:04:22 -- common/autotest_common.sh@1541 -- # continue 00:11:23.375 09:04:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:11:23.375 09:04:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.375 09:04:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.375 09:04:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:11:23.375 09:04:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.375 09:04:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.375 09:04:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.576 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.576 09:04:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:11:24.576 09:04:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.576 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.576 09:04:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:11:24.576 09:04:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:11:24.576 09:04:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:11:24.576 09:04:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:11:24.576 09:04:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:11:24.576 09:04:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:11:24.576 09:04:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:11:24.576 09:04:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:11:24.576 
09:04:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:24.576 09:04:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:11:24.576 09:04:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:24.576 09:04:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:24.576 09:04:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:24.834 09:04:23 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:11:24.834 09:04:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:24.834 09:04:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:11:24.834 09:04:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:24.834 09:04:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:11:24.834 09:04:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:24.834 09:04:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:11:24.834 09:04:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:24.834 09:04:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:11:24.834 09:04:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:24.834 09:04:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:11:24.834 09:04:23 -- common/autotest_common.sh@1570 -- # return 0 00:11:24.834 09:04:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:11:24.834 09:04:23 -- common/autotest_common.sh@1578 -- # return 0 00:11:24.834 09:04:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:11:24.834 09:04:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:11:24.834 09:04:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:24.834 09:04:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:24.834 09:04:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:11:24.834 09:04:23 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.834 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.834 09:04:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:11:24.834 09:04:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:24.834 09:04:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:24.834 09:04:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:24.834 09:04:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.834 ************************************ 00:11:24.834 START TEST env 00:11:24.834 ************************************ 00:11:24.834 09:04:23 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:24.834 * Looking for test storage... 00:11:24.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:24.834 09:04:23 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.834 09:04:23 env -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.834 09:04:23 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:25.093 09:04:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.093 09:04:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.093 09:04:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.093 09:04:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.093 09:04:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.093 09:04:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.093 09:04:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.093 09:04:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.093 09:04:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.093 09:04:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.093 09:04:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.093 09:04:23 env -- 
scripts/common.sh@344 -- # case "$op" in 00:11:25.093 09:04:23 env -- scripts/common.sh@345 -- # : 1 00:11:25.093 09:04:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.093 09:04:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.093 09:04:23 env -- scripts/common.sh@365 -- # decimal 1 00:11:25.093 09:04:23 env -- scripts/common.sh@353 -- # local d=1 00:11:25.093 09:04:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.093 09:04:23 env -- scripts/common.sh@355 -- # echo 1 00:11:25.093 09:04:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.093 09:04:23 env -- scripts/common.sh@366 -- # decimal 2 00:11:25.093 09:04:23 env -- scripts/common.sh@353 -- # local d=2 00:11:25.093 09:04:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.093 09:04:23 env -- scripts/common.sh@355 -- # echo 2 00:11:25.093 09:04:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.093 09:04:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.093 09:04:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.093 09:04:23 env -- scripts/common.sh@368 -- # return 0 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.093 --rc genhtml_branch_coverage=1 00:11:25.093 --rc genhtml_function_coverage=1 00:11:25.093 --rc genhtml_legend=1 00:11:25.093 --rc geninfo_all_blocks=1 00:11:25.093 --rc geninfo_unexecuted_blocks=1 00:11:25.093 00:11:25.093 ' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.093 --rc genhtml_branch_coverage=1 00:11:25.093 --rc genhtml_function_coverage=1 00:11:25.093 --rc genhtml_legend=1 00:11:25.093 --rc 
geninfo_all_blocks=1 00:11:25.093 --rc geninfo_unexecuted_blocks=1 00:11:25.093 00:11:25.093 ' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.093 --rc genhtml_branch_coverage=1 00:11:25.093 --rc genhtml_function_coverage=1 00:11:25.093 --rc genhtml_legend=1 00:11:25.093 --rc geninfo_all_blocks=1 00:11:25.093 --rc geninfo_unexecuted_blocks=1 00:11:25.093 00:11:25.093 ' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.093 --rc genhtml_branch_coverage=1 00:11:25.093 --rc genhtml_function_coverage=1 00:11:25.093 --rc genhtml_legend=1 00:11:25.093 --rc geninfo_all_blocks=1 00:11:25.093 --rc geninfo_unexecuted_blocks=1 00:11:25.093 00:11:25.093 ' 00:11:25.093 09:04:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:25.093 09:04:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.093 09:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:11:25.093 ************************************ 00:11:25.093 START TEST env_memory 00:11:25.093 ************************************ 00:11:25.093 09:04:23 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:25.093 00:11:25.093 00:11:25.093 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.093 http://cunit.sourceforge.net/ 00:11:25.093 00:11:25.093 00:11:25.093 Suite: memory 00:11:25.093 Test: alloc and free memory map ...[2024-11-06 09:04:23.998477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:25.093 passed 00:11:25.093 Test: mem map translation ...[2024-11-06 09:04:24.043220] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:25.093 [2024-11-06 09:04:24.043286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:25.094 [2024-11-06 09:04:24.043356] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:25.094 [2024-11-06 09:04:24.043381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:25.094 passed 00:11:25.094 Test: mem map registration ...[2024-11-06 09:04:24.111269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:11:25.094 [2024-11-06 09:04:24.111332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:11:25.352 passed 00:11:25.352 Test: mem map adjacent registrations ...passed 00:11:25.352 00:11:25.352 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.352 suites 1 1 n/a 0 0 00:11:25.352 tests 4 4 4 0 0 00:11:25.352 asserts 152 152 152 0 n/a 00:11:25.352 00:11:25.352 Elapsed time = 0.242 seconds 00:11:25.352 00:11:25.352 real 0m0.294s 00:11:25.352 user 0m0.246s 00:11:25.352 sys 0m0.039s 00:11:25.352 09:04:24 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.352 09:04:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:25.352 ************************************ 00:11:25.352 END TEST env_memory 00:11:25.352 ************************************ 00:11:25.352 09:04:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:25.352 
09:04:24 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:25.352 09:04:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.352 09:04:24 env -- common/autotest_common.sh@10 -- # set +x 00:11:25.352 ************************************ 00:11:25.352 START TEST env_vtophys 00:11:25.352 ************************************ 00:11:25.352 09:04:24 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:25.352 EAL: lib.eal log level changed from notice to debug 00:11:25.352 EAL: Detected lcore 0 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 1 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 2 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 3 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 4 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 5 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 6 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 7 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 8 as core 0 on socket 0 00:11:25.352 EAL: Detected lcore 9 as core 0 on socket 0 00:11:25.352 EAL: Maximum logical cores by configuration: 128 00:11:25.352 EAL: Detected CPU lcores: 10 00:11:25.352 EAL: Detected NUMA nodes: 1 00:11:25.352 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:11:25.352 EAL: Detected shared linkage of DPDK 00:11:25.352 EAL: No shared files mode enabled, IPC will be disabled 00:11:25.352 EAL: Selected IOVA mode 'PA' 00:11:25.352 EAL: Probing VFIO support... 00:11:25.352 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:25.352 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:25.352 EAL: Ask a virtual area of 0x2e000 bytes 00:11:25.352 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:25.352 EAL: Setting up physically contiguous memory... 
00:11:25.353 EAL: Setting maximum number of open files to 524288 00:11:25.353 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:25.353 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:25.353 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.353 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:25.353 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.353 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.353 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:25.353 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:25.353 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.612 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:25.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.612 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.612 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:25.612 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:25.612 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.612 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:25.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.612 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.612 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:25.612 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:25.612 EAL: Ask a virtual area of 0x61000 bytes 00:11:25.612 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:25.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:25.612 EAL: Ask a virtual area of 0x400000000 bytes 00:11:25.612 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:25.612 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:25.612 EAL: Hugepages will be freed exactly as allocated. 
00:11:25.612 EAL: No shared files mode enabled, IPC is disabled 00:11:25.612 EAL: No shared files mode enabled, IPC is disabled 00:11:25.612 EAL: TSC frequency is ~2490000 KHz 00:11:25.612 EAL: Main lcore 0 is ready (tid=7efef340da40;cpuset=[0]) 00:11:25.612 EAL: Trying to obtain current memory policy. 00:11:25.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:25.612 EAL: Restoring previous memory policy: 0 00:11:25.612 EAL: request: mp_malloc_sync 00:11:25.612 EAL: No shared files mode enabled, IPC is disabled 00:11:25.612 EAL: Heap on socket 0 was expanded by 2MB 00:11:25.612 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:25.612 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:25.612 EAL: Mem event callback 'spdk:(nil)' registered 00:11:25.612 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:25.612 00:11:25.612 00:11:25.612 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.612 http://cunit.sourceforge.net/ 00:11:25.612 00:11:25.612 00:11:25.612 Suite: components_suite 00:11:26.179 Test: vtophys_malloc_test ...passed 00:11:26.179 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:26.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.179 EAL: Restoring previous memory policy: 4 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was expanded by 4MB 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was shrunk by 4MB 00:11:26.179 EAL: Trying to obtain current memory policy. 
00:11:26.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.179 EAL: Restoring previous memory policy: 4 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was expanded by 6MB 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was shrunk by 6MB 00:11:26.179 EAL: Trying to obtain current memory policy. 00:11:26.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.179 EAL: Restoring previous memory policy: 4 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was expanded by 10MB 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was shrunk by 10MB 00:11:26.179 EAL: Trying to obtain current memory policy. 00:11:26.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.179 EAL: Restoring previous memory policy: 4 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was expanded by 18MB 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was shrunk by 18MB 00:11:26.179 EAL: Trying to obtain current memory policy. 
00:11:26.179 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.179 EAL: Restoring previous memory policy: 4 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was expanded by 34MB 00:11:26.179 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.179 EAL: request: mp_malloc_sync 00:11:26.179 EAL: No shared files mode enabled, IPC is disabled 00:11:26.179 EAL: Heap on socket 0 was shrunk by 34MB 00:11:26.438 EAL: Trying to obtain current memory policy. 00:11:26.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.438 EAL: Restoring previous memory policy: 4 00:11:26.438 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.438 EAL: request: mp_malloc_sync 00:11:26.438 EAL: No shared files mode enabled, IPC is disabled 00:11:26.438 EAL: Heap on socket 0 was expanded by 66MB 00:11:26.438 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.438 EAL: request: mp_malloc_sync 00:11:26.438 EAL: No shared files mode enabled, IPC is disabled 00:11:26.438 EAL: Heap on socket 0 was shrunk by 66MB 00:11:26.438 EAL: Trying to obtain current memory policy. 00:11:26.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:26.697 EAL: Restoring previous memory policy: 4 00:11:26.697 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.697 EAL: request: mp_malloc_sync 00:11:26.697 EAL: No shared files mode enabled, IPC is disabled 00:11:26.697 EAL: Heap on socket 0 was expanded by 130MB 00:11:26.697 EAL: Calling mem event callback 'spdk:(nil)' 00:11:26.955 EAL: request: mp_malloc_sync 00:11:26.955 EAL: No shared files mode enabled, IPC is disabled 00:11:26.955 EAL: Heap on socket 0 was shrunk by 130MB 00:11:26.955 EAL: Trying to obtain current memory policy. 
00:11:26.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:27.215 EAL: Restoring previous memory policy: 4 00:11:27.215 EAL: Calling mem event callback 'spdk:(nil)' 00:11:27.215 EAL: request: mp_malloc_sync 00:11:27.215 EAL: No shared files mode enabled, IPC is disabled 00:11:27.215 EAL: Heap on socket 0 was expanded by 258MB 00:11:27.473 EAL: Calling mem event callback 'spdk:(nil)' 00:11:27.473 EAL: request: mp_malloc_sync 00:11:27.473 EAL: No shared files mode enabled, IPC is disabled 00:11:27.473 EAL: Heap on socket 0 was shrunk by 258MB 00:11:28.060 EAL: Trying to obtain current memory policy. 00:11:28.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.060 EAL: Restoring previous memory policy: 4 00:11:28.060 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.060 EAL: request: mp_malloc_sync 00:11:28.060 EAL: No shared files mode enabled, IPC is disabled 00:11:28.060 EAL: Heap on socket 0 was expanded by 514MB 00:11:28.996 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.996 EAL: request: mp_malloc_sync 00:11:28.996 EAL: No shared files mode enabled, IPC is disabled 00:11:28.996 EAL: Heap on socket 0 was shrunk by 514MB 00:11:29.967 EAL: Trying to obtain current memory policy. 
00:11:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:30.227 EAL: Restoring previous memory policy: 4 00:11:30.227 EAL: Calling mem event callback 'spdk:(nil)' 00:11:30.227 EAL: request: mp_malloc_sync 00:11:30.227 EAL: No shared files mode enabled, IPC is disabled 00:11:30.227 EAL: Heap on socket 0 was expanded by 1026MB 00:11:32.133 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.133 EAL: request: mp_malloc_sync 00:11:32.133 EAL: No shared files mode enabled, IPC is disabled 00:11:32.133 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:34.034 passed 00:11:34.034 00:11:34.034 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.034 suites 1 1 n/a 0 0 00:11:34.034 tests 2 2 2 0 0 00:11:34.034 asserts 5663 5663 5663 0 n/a 00:11:34.034 00:11:34.034 Elapsed time = 8.171 seconds 00:11:34.034 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.034 EAL: request: mp_malloc_sync 00:11:34.034 EAL: No shared files mode enabled, IPC is disabled 00:11:34.034 EAL: Heap on socket 0 was shrunk by 2MB 00:11:34.034 EAL: No shared files mode enabled, IPC is disabled 00:11:34.034 EAL: No shared files mode enabled, IPC is disabled 00:11:34.034 EAL: No shared files mode enabled, IPC is disabled 00:11:34.034 00:11:34.034 real 0m8.506s 00:11:34.034 user 0m7.480s 00:11:34.034 sys 0m0.873s 00:11:34.034 09:04:32 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.034 09:04:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 ************************************ 00:11:34.034 END TEST env_vtophys 00:11:34.034 ************************************ 00:11:34.034 09:04:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:34.034 09:04:32 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.034 09:04:32 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.034 09:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 
************************************ 00:11:34.034 START TEST env_pci ************************************ 00:11:34.034 09:04:32 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:34.034 00:11:34.034 00:11:34.034 CUnit - A unit testing framework for C - Version 2.1-3 00:11:34.034 http://cunit.sourceforge.net/ 00:11:34.034 00:11:34.034 00:11:34.034 Suite: pci 00:11:34.034 Test: pci_hook ...[2024-11-06 09:04:32.919959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56552 has claimed it 00:11:34.034 passed 00:11:34.034 00:11:34.034 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.034 suites 1 1 n/a 0 0 00:11:34.034 tests 1 1 1 0 0 00:11:34.034 asserts 25 25 25 0 n/a 00:11:34.034 00:11:34.034 Elapsed time = 0.007 seconds EAL: Cannot find device (10000:00:01.0) 00:11:34.034 EAL: Failed to attach device on primary process 00:11:34.034 00:11:34.034 00:11:34.034 real 0m0.105s 00:11:34.034 user 0m0.048s 00:11:34.034 sys 0m0.056s 00:11:34.034 09:04:32 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.034 09:04:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 ************************************ 00:11:34.034 END TEST env_pci 00:11:34.034 ************************************ 00:11:34.034 09:04:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:34.034 09:04:33 env -- env/env.sh@15 -- # uname 00:11:34.034 09:04:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:34.034 09:04:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:34.034 09:04:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:34.034 09:04:33 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:34.034 09:04:33 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.034 09:04:33 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 ************************************ 00:11:34.034 START TEST env_dpdk_post_init 00:11:34.034 ************************************ 00:11:34.034 09:04:33 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:34.293 EAL: Detected CPU lcores: 10 00:11:34.293 EAL: Detected NUMA nodes: 1 00:11:34.293 EAL: Detected shared linkage of DPDK 00:11:34.293 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:34.293 EAL: Selected IOVA mode 'PA' 00:11:34.293 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:34.293 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:34.293 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:34.551 Starting DPDK initialization... 00:11:34.551 Starting SPDK post initialization... 00:11:34.551 SPDK NVMe probe 00:11:34.551 Attaching to 0000:00:10.0 00:11:34.551 Attaching to 0000:00:11.0 00:11:34.551 Attached to 0000:00:10.0 00:11:34.551 Attached to 0000:00:11.0 00:11:34.551 Cleaning up... 
00:11:34.551 00:11:34.551 real 0m0.299s 00:11:34.551 user 0m0.099s 00:11:34.551 sys 0m0.100s 00:11:34.551 09:04:33 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.551 09:04:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:34.551 ************************************ 00:11:34.551 END TEST env_dpdk_post_init 00:11:34.551 ************************************ 00:11:34.551 09:04:33 env -- env/env.sh@26 -- # uname 00:11:34.551 09:04:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:34.551 09:04:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:34.551 09:04:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.551 09:04:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.551 09:04:33 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.551 ************************************ 00:11:34.551 START TEST env_mem_callbacks 00:11:34.551 ************************************ 00:11:34.552 09:04:33 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:34.552 EAL: Detected CPU lcores: 10 00:11:34.552 EAL: Detected NUMA nodes: 1 00:11:34.552 EAL: Detected shared linkage of DPDK 00:11:34.552 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:34.552 EAL: Selected IOVA mode 'PA' 00:11:34.810 00:11:34.810 00:11:34.810 CUnit - A unit testing framework for C - Version 2.1-3 00:11:34.810 http://cunit.sourceforge.net/ 00:11:34.810 00:11:34.810 00:11:34.810 Suite: memory 00:11:34.810 Test: test ... 
00:11:34.810 register 0x200000200000 2097152 00:11:34.810 malloc 3145728 00:11:34.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:34.810 register 0x200000400000 4194304 00:11:34.810 buf 0x2000004fffc0 len 3145728 PASSED 00:11:34.810 malloc 64 00:11:34.810 buf 0x2000004ffec0 len 64 PASSED 00:11:34.810 malloc 4194304 00:11:34.810 register 0x200000800000 6291456 00:11:34.810 buf 0x2000009fffc0 len 4194304 PASSED 00:11:34.810 free 0x2000004fffc0 3145728 00:11:34.811 free 0x2000004ffec0 64 00:11:34.811 unregister 0x200000400000 4194304 PASSED 00:11:34.811 free 0x2000009fffc0 4194304 00:11:34.811 unregister 0x200000800000 6291456 PASSED 00:11:34.811 malloc 8388608 00:11:34.811 register 0x200000400000 10485760 00:11:34.811 buf 0x2000005fffc0 len 8388608 PASSED 00:11:34.811 free 0x2000005fffc0 8388608 00:11:34.811 unregister 0x200000400000 10485760 PASSED 00:11:34.811 passed 00:11:34.811 00:11:34.811 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.811 suites 1 1 n/a 0 0 00:11:34.811 tests 1 1 1 0 0 00:11:34.811 asserts 15 15 15 0 n/a 00:11:34.811 00:11:34.811 Elapsed time = 0.082 seconds 00:11:34.811 00:11:34.811 real 0m0.294s 00:11:34.811 user 0m0.119s 00:11:34.811 sys 0m0.074s 00:11:34.811 09:04:33 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.811 09:04:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:34.811 ************************************ 00:11:34.811 END TEST env_mem_callbacks 00:11:34.811 ************************************ 00:11:34.811 00:11:34.811 real 0m10.085s 00:11:34.811 user 0m8.257s 00:11:34.811 sys 0m1.480s 00:11:34.811 09:04:33 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.811 09:04:33 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.811 ************************************ 00:11:34.811 END TEST env 00:11:34.811 ************************************ 00:11:34.811 09:04:33 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:34.811 09:04:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.811 09:04:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.811 09:04:33 -- common/autotest_common.sh@10 -- # set +x 00:11:34.811 ************************************ 00:11:34.811 START TEST rpc 00:11:34.811 ************************************ 00:11:34.811 09:04:33 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:35.069 * Looking for test storage... 00:11:35.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:35.069 09:04:33 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.069 09:04:33 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.069 09:04:33 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.069 09:04:34 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.069 09:04:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.069 09:04:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.069 09:04:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.069 09:04:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.069 09:04:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.069 09:04:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.069 09:04:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.069 09:04:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.069 09:04:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.069 09:04:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:35.069 09:04:34 rpc -- scripts/common.sh@345 -- # : 1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.069 09:04:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.069 09:04:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@353 -- # local d=1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.069 09:04:34 rpc -- scripts/common.sh@355 -- # echo 1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.069 09:04:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:35.069 09:04:34 rpc -- scripts/common.sh@353 -- # local d=2 00:11:35.070 09:04:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.070 09:04:34 rpc -- scripts/common.sh@355 -- # echo 2 00:11:35.070 09:04:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.070 09:04:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.070 09:04:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.070 09:04:34 rpc -- scripts/common.sh@368 -- # return 0 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.070 --rc genhtml_branch_coverage=1 00:11:35.070 --rc genhtml_function_coverage=1 00:11:35.070 --rc genhtml_legend=1 00:11:35.070 --rc geninfo_all_blocks=1 00:11:35.070 --rc geninfo_unexecuted_blocks=1 00:11:35.070 00:11:35.070 ' 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.070 --rc genhtml_branch_coverage=1 00:11:35.070 --rc genhtml_function_coverage=1 00:11:35.070 --rc genhtml_legend=1 00:11:35.070 --rc geninfo_all_blocks=1 00:11:35.070 --rc geninfo_unexecuted_blocks=1 00:11:35.070 00:11:35.070 ' 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:35.070 --rc genhtml_branch_coverage=1 00:11:35.070 --rc genhtml_function_coverage=1 00:11:35.070 --rc genhtml_legend=1 00:11:35.070 --rc geninfo_all_blocks=1 00:11:35.070 --rc geninfo_unexecuted_blocks=1 00:11:35.070 00:11:35.070 ' 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.070 --rc genhtml_branch_coverage=1 00:11:35.070 --rc genhtml_function_coverage=1 00:11:35.070 --rc genhtml_legend=1 00:11:35.070 --rc geninfo_all_blocks=1 00:11:35.070 --rc geninfo_unexecuted_blocks=1 00:11:35.070 00:11:35.070 ' 00:11:35.070 09:04:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56679 00:11:35.070 09:04:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:35.070 09:04:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:35.070 09:04:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56679 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@833 -- # '[' -z 56679 ']' 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.070 09:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 [2024-11-06 09:04:34.193339] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:11:35.327 [2024-11-06 09:04:34.193472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56679 ] 00:11:35.586 [2024-11-06 09:04:34.373856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.586 [2024-11-06 09:04:34.492889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:35.586 [2024-11-06 09:04:34.492960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56679' to capture a snapshot of events at runtime. 00:11:35.586 [2024-11-06 09:04:34.492973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.586 [2024-11-06 09:04:34.492988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.586 [2024-11-06 09:04:34.492998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56679 for offline analysis/debug. 
00:11:35.586 [2024-11-06 09:04:34.494245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.539 09:04:35 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.539 09:04:35 rpc -- common/autotest_common.sh@866 -- # return 0 00:11:36.539 09:04:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:36.539 09:04:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:36.539 09:04:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:36.539 09:04:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:36.539 09:04:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:36.539 09:04:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.539 09:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.539 ************************************ 00:11:36.539 START TEST rpc_integrity 00:11:36.539 ************************************ 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:36.539 09:04:35 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.539 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.539 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:36.539 { 00:11:36.539 "name": "Malloc0", 00:11:36.539 "aliases": [ 00:11:36.539 "bfca924c-f046-4acf-89c8-45a37c0a0f37" 00:11:36.539 ], 00:11:36.539 "product_name": "Malloc disk", 00:11:36.539 "block_size": 512, 00:11:36.539 "num_blocks": 16384, 00:11:36.539 "uuid": "bfca924c-f046-4acf-89c8-45a37c0a0f37", 00:11:36.540 "assigned_rate_limits": { 00:11:36.540 "rw_ios_per_sec": 0, 00:11:36.540 "rw_mbytes_per_sec": 0, 00:11:36.540 "r_mbytes_per_sec": 0, 00:11:36.540 "w_mbytes_per_sec": 0 00:11:36.540 }, 00:11:36.540 "claimed": false, 00:11:36.540 "zoned": false, 00:11:36.540 "supported_io_types": { 00:11:36.540 "read": true, 00:11:36.540 "write": true, 00:11:36.540 "unmap": true, 00:11:36.540 "flush": true, 00:11:36.540 "reset": true, 00:11:36.540 "nvme_admin": false, 00:11:36.540 "nvme_io": false, 00:11:36.540 "nvme_io_md": false, 00:11:36.540 "write_zeroes": true, 00:11:36.540 "zcopy": true, 00:11:36.540 "get_zone_info": false, 00:11:36.540 "zone_management": false, 00:11:36.540 "zone_append": false, 00:11:36.540 "compare": false, 00:11:36.540 "compare_and_write": false, 00:11:36.540 "abort": true, 00:11:36.540 "seek_hole": false, 
00:11:36.540 "seek_data": false, 00:11:36.540 "copy": true, 00:11:36.540 "nvme_iov_md": false 00:11:36.540 }, 00:11:36.540 "memory_domains": [ 00:11:36.540 { 00:11:36.540 "dma_device_id": "system", 00:11:36.540 "dma_device_type": 1 00:11:36.540 }, 00:11:36.540 { 00:11:36.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.540 "dma_device_type": 2 00:11:36.540 } 00:11:36.540 ], 00:11:36.540 "driver_specific": {} 00:11:36.540 } 00:11:36.540 ]' 00:11:36.540 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:36.540 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:36.540 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:36.540 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.540 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 [2024-11-06 09:04:35.558247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:36.540 [2024-11-06 09:04:35.558326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.540 [2024-11-06 09:04:35.558357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:36.540 [2024-11-06 09:04:35.558377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.540 [2024-11-06 09:04:35.561039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.540 [2024-11-06 09:04:35.561086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:36.540 Passthru0 00:11:36.540 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.540 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:36.540 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.540 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:36.799 { 00:11:36.799 "name": "Malloc0", 00:11:36.799 "aliases": [ 00:11:36.799 "bfca924c-f046-4acf-89c8-45a37c0a0f37" 00:11:36.799 ], 00:11:36.799 "product_name": "Malloc disk", 00:11:36.799 "block_size": 512, 00:11:36.799 "num_blocks": 16384, 00:11:36.799 "uuid": "bfca924c-f046-4acf-89c8-45a37c0a0f37", 00:11:36.799 "assigned_rate_limits": { 00:11:36.799 "rw_ios_per_sec": 0, 00:11:36.799 "rw_mbytes_per_sec": 0, 00:11:36.799 "r_mbytes_per_sec": 0, 00:11:36.799 "w_mbytes_per_sec": 0 00:11:36.799 }, 00:11:36.799 "claimed": true, 00:11:36.799 "claim_type": "exclusive_write", 00:11:36.799 "zoned": false, 00:11:36.799 "supported_io_types": { 00:11:36.799 "read": true, 00:11:36.799 "write": true, 00:11:36.799 "unmap": true, 00:11:36.799 "flush": true, 00:11:36.799 "reset": true, 00:11:36.799 "nvme_admin": false, 00:11:36.799 "nvme_io": false, 00:11:36.799 "nvme_io_md": false, 00:11:36.799 "write_zeroes": true, 00:11:36.799 "zcopy": true, 00:11:36.799 "get_zone_info": false, 00:11:36.799 "zone_management": false, 00:11:36.799 "zone_append": false, 00:11:36.799 "compare": false, 00:11:36.799 "compare_and_write": false, 00:11:36.799 "abort": true, 00:11:36.799 "seek_hole": false, 00:11:36.799 "seek_data": false, 00:11:36.799 "copy": true, 00:11:36.799 "nvme_iov_md": false 00:11:36.799 }, 00:11:36.799 "memory_domains": [ 00:11:36.799 { 00:11:36.799 "dma_device_id": "system", 00:11:36.799 "dma_device_type": 1 00:11:36.799 }, 00:11:36.799 { 00:11:36.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.799 "dma_device_type": 2 00:11:36.799 } 00:11:36.799 ], 00:11:36.799 "driver_specific": {} 00:11:36.799 }, 00:11:36.799 { 00:11:36.799 "name": "Passthru0", 00:11:36.799 "aliases": [ 00:11:36.799 "5e19926a-631b-58e6-9983-802295a37a32" 00:11:36.799 ], 00:11:36.799 "product_name": "passthru", 00:11:36.799 
"block_size": 512, 00:11:36.799 "num_blocks": 16384, 00:11:36.799 "uuid": "5e19926a-631b-58e6-9983-802295a37a32", 00:11:36.799 "assigned_rate_limits": { 00:11:36.799 "rw_ios_per_sec": 0, 00:11:36.799 "rw_mbytes_per_sec": 0, 00:11:36.799 "r_mbytes_per_sec": 0, 00:11:36.799 "w_mbytes_per_sec": 0 00:11:36.799 }, 00:11:36.799 "claimed": false, 00:11:36.799 "zoned": false, 00:11:36.799 "supported_io_types": { 00:11:36.799 "read": true, 00:11:36.799 "write": true, 00:11:36.799 "unmap": true, 00:11:36.799 "flush": true, 00:11:36.799 "reset": true, 00:11:36.799 "nvme_admin": false, 00:11:36.799 "nvme_io": false, 00:11:36.799 "nvme_io_md": false, 00:11:36.799 "write_zeroes": true, 00:11:36.799 "zcopy": true, 00:11:36.799 "get_zone_info": false, 00:11:36.799 "zone_management": false, 00:11:36.799 "zone_append": false, 00:11:36.799 "compare": false, 00:11:36.799 "compare_and_write": false, 00:11:36.799 "abort": true, 00:11:36.799 "seek_hole": false, 00:11:36.799 "seek_data": false, 00:11:36.799 "copy": true, 00:11:36.799 "nvme_iov_md": false 00:11:36.799 }, 00:11:36.799 "memory_domains": [ 00:11:36.799 { 00:11:36.799 "dma_device_id": "system", 00:11:36.799 "dma_device_type": 1 00:11:36.799 }, 00:11:36.799 { 00:11:36.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.799 "dma_device_type": 2 00:11:36.799 } 00:11:36.799 ], 00:11:36.799 "driver_specific": { 00:11:36.799 "passthru": { 00:11:36.799 "name": "Passthru0", 00:11:36.799 "base_bdev_name": "Malloc0" 00:11:36.799 } 00:11:36.799 } 00:11:36.799 } 00:11:36.799 ]' 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 09:04:35 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:36.799 09:04:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:36.799 00:11:36.799 real 0m0.341s 00:11:36.799 user 0m0.175s 00:11:36.799 sys 0m0.066s 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.799 ************************************ 00:11:36.799 09:04:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 END TEST rpc_integrity 00:11:36.799 ************************************ 00:11:36.799 09:04:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:36.799 09:04:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:36.799 09:04:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.799 09:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 ************************************ 00:11:36.799 START TEST rpc_plugins 00:11:36.799 ************************************ 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:11:36.799 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.799 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:36.799 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.799 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.058 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:37.058 { 00:11:37.058 "name": "Malloc1", 00:11:37.058 "aliases": [ 00:11:37.058 "d51efb94-1083-4d66-9b6b-33cdc2d16c90" 00:11:37.058 ], 00:11:37.058 "product_name": "Malloc disk", 00:11:37.058 "block_size": 4096, 00:11:37.059 "num_blocks": 256, 00:11:37.059 "uuid": "d51efb94-1083-4d66-9b6b-33cdc2d16c90", 00:11:37.059 "assigned_rate_limits": { 00:11:37.059 "rw_ios_per_sec": 0, 00:11:37.059 "rw_mbytes_per_sec": 0, 00:11:37.059 "r_mbytes_per_sec": 0, 00:11:37.059 "w_mbytes_per_sec": 0 00:11:37.059 }, 00:11:37.059 "claimed": false, 00:11:37.059 "zoned": false, 00:11:37.059 "supported_io_types": { 00:11:37.059 "read": true, 00:11:37.059 "write": true, 00:11:37.059 "unmap": true, 00:11:37.059 "flush": true, 00:11:37.059 "reset": true, 00:11:37.059 "nvme_admin": false, 00:11:37.059 "nvme_io": false, 00:11:37.059 "nvme_io_md": false, 00:11:37.059 "write_zeroes": true, 00:11:37.059 "zcopy": true, 00:11:37.059 "get_zone_info": false, 00:11:37.059 "zone_management": false, 00:11:37.059 "zone_append": false, 00:11:37.059 "compare": false, 00:11:37.059 "compare_and_write": false, 00:11:37.059 "abort": true, 00:11:37.059 "seek_hole": false, 00:11:37.059 "seek_data": false, 00:11:37.059 "copy": 
true, 00:11:37.059 "nvme_iov_md": false 00:11:37.059 }, 00:11:37.059 "memory_domains": [ 00:11:37.059 { 00:11:37.059 "dma_device_id": "system", 00:11:37.059 "dma_device_type": 1 00:11:37.059 }, 00:11:37.059 { 00:11:37.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.059 "dma_device_type": 2 00:11:37.059 } 00:11:37.059 ], 00:11:37.059 "driver_specific": {} 00:11:37.059 } 00:11:37.059 ]' 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:37.059 09:04:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:37.059 00:11:37.059 real 0m0.160s 00:11:37.059 user 0m0.090s 00:11:37.059 sys 0m0.030s 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.059 09:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 ************************************ 00:11:37.059 END TEST rpc_plugins 00:11:37.059 ************************************ 00:11:37.059 09:04:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:37.059 09:04:36 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:37.059 09:04:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.059 09:04:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 ************************************ 00:11:37.059 START TEST rpc_trace_cmd_test 00:11:37.059 ************************************ 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:37.059 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56679", 00:11:37.059 "tpoint_group_mask": "0x8", 00:11:37.059 "iscsi_conn": { 00:11:37.059 "mask": "0x2", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "scsi": { 00:11:37.059 "mask": "0x4", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "bdev": { 00:11:37.059 "mask": "0x8", 00:11:37.059 "tpoint_mask": "0xffffffffffffffff" 00:11:37.059 }, 00:11:37.059 "nvmf_rdma": { 00:11:37.059 "mask": "0x10", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "nvmf_tcp": { 00:11:37.059 "mask": "0x20", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "ftl": { 00:11:37.059 "mask": "0x40", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "blobfs": { 00:11:37.059 "mask": "0x80", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "dsa": { 00:11:37.059 "mask": "0x200", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "thread": { 00:11:37.059 "mask": "0x400", 00:11:37.059 
"tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "nvme_pcie": { 00:11:37.059 "mask": "0x800", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "iaa": { 00:11:37.059 "mask": "0x1000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "nvme_tcp": { 00:11:37.059 "mask": "0x2000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "bdev_nvme": { 00:11:37.059 "mask": "0x4000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "sock": { 00:11:37.059 "mask": "0x8000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "blob": { 00:11:37.059 "mask": "0x10000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "bdev_raid": { 00:11:37.059 "mask": "0x20000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 }, 00:11:37.059 "scheduler": { 00:11:37.059 "mask": "0x40000", 00:11:37.059 "tpoint_mask": "0x0" 00:11:37.059 } 00:11:37.059 }' 00:11:37.059 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:37.318 00:11:37.318 real 0m0.218s 00:11:37.318 user 0m0.166s 00:11:37.318 sys 0m0.040s 00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:11:37.318 09:04:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.318 ************************************ 00:11:37.318 END TEST rpc_trace_cmd_test 00:11:37.318 ************************************ 00:11:37.318 09:04:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:37.318 09:04:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:37.318 09:04:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:37.318 09:04:36 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:37.318 09:04:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.318 09:04:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.318 ************************************ 00:11:37.318 START TEST rpc_daemon_integrity 00:11:37.318 ************************************ 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:37.318 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.577 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:37.577 { 00:11:37.577 "name": "Malloc2", 00:11:37.577 "aliases": [ 00:11:37.577 "554226fe-a413-4d98-a4ec-fd9e36777638" 00:11:37.577 ], 00:11:37.577 "product_name": "Malloc disk", 00:11:37.577 "block_size": 512, 00:11:37.577 "num_blocks": 16384, 00:11:37.577 "uuid": "554226fe-a413-4d98-a4ec-fd9e36777638", 00:11:37.577 "assigned_rate_limits": { 00:11:37.577 "rw_ios_per_sec": 0, 00:11:37.577 "rw_mbytes_per_sec": 0, 00:11:37.577 "r_mbytes_per_sec": 0, 00:11:37.577 "w_mbytes_per_sec": 0 00:11:37.577 }, 00:11:37.577 "claimed": false, 00:11:37.577 "zoned": false, 00:11:37.577 "supported_io_types": { 00:11:37.578 "read": true, 00:11:37.578 "write": true, 00:11:37.578 "unmap": true, 00:11:37.578 "flush": true, 00:11:37.578 "reset": true, 00:11:37.578 "nvme_admin": false, 00:11:37.578 "nvme_io": false, 00:11:37.578 "nvme_io_md": false, 00:11:37.578 "write_zeroes": true, 00:11:37.578 "zcopy": true, 00:11:37.578 "get_zone_info": false, 00:11:37.578 "zone_management": false, 00:11:37.578 "zone_append": false, 00:11:37.578 "compare": false, 00:11:37.578 "compare_and_write": false, 00:11:37.578 "abort": true, 00:11:37.578 "seek_hole": false, 00:11:37.578 "seek_data": false, 00:11:37.578 "copy": true, 00:11:37.578 "nvme_iov_md": false 00:11:37.578 }, 00:11:37.578 "memory_domains": [ 00:11:37.578 { 00:11:37.578 "dma_device_id": "system", 00:11:37.578 "dma_device_type": 1 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.578 "dma_device_type": 2 00:11:37.578 } 
00:11:37.578 ], 00:11:37.578 "driver_specific": {} 00:11:37.578 } 00:11:37.578 ]' 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 [2024-11-06 09:04:36.478580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:37.578 [2024-11-06 09:04:36.478668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.578 [2024-11-06 09:04:36.478693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:37.578 [2024-11-06 09:04:36.478709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.578 [2024-11-06 09:04:36.481498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.578 [2024-11-06 09:04:36.481542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:37.578 Passthru0 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:37.578 { 00:11:37.578 "name": "Malloc2", 00:11:37.578 "aliases": [ 00:11:37.578 "554226fe-a413-4d98-a4ec-fd9e36777638" 
00:11:37.578 ], 00:11:37.578 "product_name": "Malloc disk", 00:11:37.578 "block_size": 512, 00:11:37.578 "num_blocks": 16384, 00:11:37.578 "uuid": "554226fe-a413-4d98-a4ec-fd9e36777638", 00:11:37.578 "assigned_rate_limits": { 00:11:37.578 "rw_ios_per_sec": 0, 00:11:37.578 "rw_mbytes_per_sec": 0, 00:11:37.578 "r_mbytes_per_sec": 0, 00:11:37.578 "w_mbytes_per_sec": 0 00:11:37.578 }, 00:11:37.578 "claimed": true, 00:11:37.578 "claim_type": "exclusive_write", 00:11:37.578 "zoned": false, 00:11:37.578 "supported_io_types": { 00:11:37.578 "read": true, 00:11:37.578 "write": true, 00:11:37.578 "unmap": true, 00:11:37.578 "flush": true, 00:11:37.578 "reset": true, 00:11:37.578 "nvme_admin": false, 00:11:37.578 "nvme_io": false, 00:11:37.578 "nvme_io_md": false, 00:11:37.578 "write_zeroes": true, 00:11:37.578 "zcopy": true, 00:11:37.578 "get_zone_info": false, 00:11:37.578 "zone_management": false, 00:11:37.578 "zone_append": false, 00:11:37.578 "compare": false, 00:11:37.578 "compare_and_write": false, 00:11:37.578 "abort": true, 00:11:37.578 "seek_hole": false, 00:11:37.578 "seek_data": false, 00:11:37.578 "copy": true, 00:11:37.578 "nvme_iov_md": false 00:11:37.578 }, 00:11:37.578 "memory_domains": [ 00:11:37.578 { 00:11:37.578 "dma_device_id": "system", 00:11:37.578 "dma_device_type": 1 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.578 "dma_device_type": 2 00:11:37.578 } 00:11:37.578 ], 00:11:37.578 "driver_specific": {} 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "name": "Passthru0", 00:11:37.578 "aliases": [ 00:11:37.578 "7cda11b7-b83a-5316-b152-2bdcfbc9012f" 00:11:37.578 ], 00:11:37.578 "product_name": "passthru", 00:11:37.578 "block_size": 512, 00:11:37.578 "num_blocks": 16384, 00:11:37.578 "uuid": "7cda11b7-b83a-5316-b152-2bdcfbc9012f", 00:11:37.578 "assigned_rate_limits": { 00:11:37.578 "rw_ios_per_sec": 0, 00:11:37.578 "rw_mbytes_per_sec": 0, 00:11:37.578 "r_mbytes_per_sec": 0, 00:11:37.578 "w_mbytes_per_sec": 0 
00:11:37.578 }, 00:11:37.578 "claimed": false, 00:11:37.578 "zoned": false, 00:11:37.578 "supported_io_types": { 00:11:37.578 "read": true, 00:11:37.578 "write": true, 00:11:37.578 "unmap": true, 00:11:37.578 "flush": true, 00:11:37.578 "reset": true, 00:11:37.578 "nvme_admin": false, 00:11:37.578 "nvme_io": false, 00:11:37.578 "nvme_io_md": false, 00:11:37.578 "write_zeroes": true, 00:11:37.578 "zcopy": true, 00:11:37.578 "get_zone_info": false, 00:11:37.578 "zone_management": false, 00:11:37.578 "zone_append": false, 00:11:37.578 "compare": false, 00:11:37.578 "compare_and_write": false, 00:11:37.578 "abort": true, 00:11:37.578 "seek_hole": false, 00:11:37.578 "seek_data": false, 00:11:37.578 "copy": true, 00:11:37.578 "nvme_iov_md": false 00:11:37.578 }, 00:11:37.578 "memory_domains": [ 00:11:37.578 { 00:11:37.578 "dma_device_id": "system", 00:11:37.578 "dma_device_type": 1 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.578 "dma_device_type": 2 00:11:37.578 } 00:11:37.578 ], 00:11:37.578 "driver_specific": { 00:11:37.578 "passthru": { 00:11:37.578 "name": "Passthru0", 00:11:37.578 "base_bdev_name": "Malloc2" 00:11:37.578 } 00:11:37.578 } 00:11:37.578 } 00:11:37.578 ]' 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:37.578 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:37.837 09:04:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:37.837 00:11:37.837 real 0m0.348s 00:11:37.837 user 0m0.171s 00:11:37.837 sys 0m0.065s 00:11:37.837 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.837 ************************************ 00:11:37.837 09:04:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:37.837 END TEST rpc_daemon_integrity 00:11:37.837 ************************************ 00:11:37.837 09:04:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:37.837 09:04:36 rpc -- rpc/rpc.sh@84 -- # killprocess 56679 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@952 -- # '[' -z 56679 ']' 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@956 -- # kill -0 56679 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@957 -- # uname 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56679 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.837 
killing process with pid 56679 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56679' 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@971 -- # kill 56679 00:11:37.837 09:04:36 rpc -- common/autotest_common.sh@976 -- # wait 56679 00:11:40.374 00:11:40.374 real 0m5.356s 00:11:40.375 user 0m5.773s 00:11:40.375 sys 0m1.000s 00:11:40.375 09:04:39 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.375 09:04:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.375 ************************************ 00:11:40.375 END TEST rpc 00:11:40.375 ************************************ 00:11:40.375 09:04:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:40.375 09:04:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:40.375 09:04:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.375 09:04:39 -- common/autotest_common.sh@10 -- # set +x 00:11:40.375 ************************************ 00:11:40.375 START TEST skip_rpc 00:11:40.375 ************************************ 00:11:40.375 09:04:39 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:40.375 * Looking for test storage... 
00:11:40.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:40.375 09:04:39 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.375 09:04:39 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.375 09:04:39 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.634 09:04:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.634 --rc genhtml_branch_coverage=1 00:11:40.634 --rc genhtml_function_coverage=1 00:11:40.634 --rc genhtml_legend=1 00:11:40.634 --rc geninfo_all_blocks=1 00:11:40.634 --rc geninfo_unexecuted_blocks=1 00:11:40.634 00:11:40.634 ' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.634 --rc genhtml_branch_coverage=1 00:11:40.634 --rc genhtml_function_coverage=1 00:11:40.634 --rc genhtml_legend=1 00:11:40.634 --rc geninfo_all_blocks=1 00:11:40.634 --rc geninfo_unexecuted_blocks=1 00:11:40.634 00:11:40.634 ' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:11:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.634 --rc genhtml_branch_coverage=1 00:11:40.634 --rc genhtml_function_coverage=1 00:11:40.634 --rc genhtml_legend=1 00:11:40.634 --rc geninfo_all_blocks=1 00:11:40.634 --rc geninfo_unexecuted_blocks=1 00:11:40.634 00:11:40.634 ' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.634 --rc genhtml_branch_coverage=1 00:11:40.634 --rc genhtml_function_coverage=1 00:11:40.634 --rc genhtml_legend=1 00:11:40.634 --rc geninfo_all_blocks=1 00:11:40.634 --rc geninfo_unexecuted_blocks=1 00:11:40.634 00:11:40.634 ' 00:11:40.634 09:04:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:40.634 09:04:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:40.634 09:04:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.634 09:04:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.634 ************************************ 00:11:40.634 START TEST skip_rpc 00:11:40.634 ************************************ 00:11:40.634 09:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:11:40.634 09:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56914 00:11:40.634 09:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:40.634 09:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:40.634 09:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:40.634 [2024-11-06 09:04:39.641808] Starting SPDK v25.01-pre 
git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:11:40.634 [2024-11-06 09:04:39.641938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56914 ] 00:11:40.898 [2024-11-06 09:04:39.826419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.157 [2024-11-06 09:04:39.949193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56914 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56914 ']' 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56914 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56914 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:46.426 killing process with pid 56914 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56914' 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56914 00:11:46.426 09:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56914 00:11:48.327 00:11:48.327 real 0m7.513s 00:11:48.327 user 0m6.990s 00:11:48.327 sys 0m0.438s 00:11:48.327 09:04:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.327 09:04:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.327 ************************************ 00:11:48.327 END TEST skip_rpc 00:11:48.327 ************************************ 00:11:48.327 09:04:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:48.327 09:04:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:48.327 09:04:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.327 09:04:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.327 
************************************ 00:11:48.327 START TEST skip_rpc_with_json 00:11:48.327 ************************************ 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57018 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57018 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57018 ']' 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.327 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:48.327 [2024-11-06 09:04:47.227210] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:11:48.327 [2024-11-06 09:04:47.227358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57018 ] 00:11:48.585 [2024-11-06 09:04:47.408906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.585 [2024-11-06 09:04:47.532505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:49.520 [2024-11-06 09:04:48.407568] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:49.520 request: 00:11:49.520 { 00:11:49.520 "trtype": "tcp", 00:11:49.520 "method": "nvmf_get_transports", 00:11:49.520 "req_id": 1 00:11:49.520 } 00:11:49.520 Got JSON-RPC error response 00:11:49.520 response: 00:11:49.520 { 00:11:49.520 "code": -19, 00:11:49.520 "message": "No such device" 00:11:49.520 } 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:49.520 [2024-11-06 09:04:48.419668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.520 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:49.779 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.779 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:49.779 { 00:11:49.779 "subsystems": [ 00:11:49.779 { 00:11:49.779 "subsystem": "fsdev", 00:11:49.779 "config": [ 00:11:49.779 { 00:11:49.779 "method": "fsdev_set_opts", 00:11:49.779 "params": { 00:11:49.779 "fsdev_io_pool_size": 65535, 00:11:49.779 "fsdev_io_cache_size": 256 00:11:49.779 } 00:11:49.779 } 00:11:49.779 ] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "keyring", 00:11:49.779 "config": [] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "iobuf", 00:11:49.779 "config": [ 00:11:49.779 { 00:11:49.779 "method": "iobuf_set_options", 00:11:49.779 "params": { 00:11:49.779 "small_pool_count": 8192, 00:11:49.779 "large_pool_count": 1024, 00:11:49.779 "small_bufsize": 8192, 00:11:49.779 "large_bufsize": 135168, 00:11:49.779 "enable_numa": false 00:11:49.779 } 00:11:49.779 } 00:11:49.779 ] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "sock", 00:11:49.779 "config": [ 00:11:49.779 { 00:11:49.779 "method": "sock_set_default_impl", 00:11:49.779 "params": { 00:11:49.779 "impl_name": "posix" 00:11:49.779 } 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "method": "sock_impl_set_options", 00:11:49.779 "params": { 00:11:49.779 "impl_name": "ssl", 00:11:49.779 "recv_buf_size": 4096, 00:11:49.779 "send_buf_size": 4096, 00:11:49.779 "enable_recv_pipe": true, 00:11:49.779 "enable_quickack": false, 00:11:49.779 
"enable_placement_id": 0, 00:11:49.779 "enable_zerocopy_send_server": true, 00:11:49.779 "enable_zerocopy_send_client": false, 00:11:49.779 "zerocopy_threshold": 0, 00:11:49.779 "tls_version": 0, 00:11:49.779 "enable_ktls": false 00:11:49.779 } 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "method": "sock_impl_set_options", 00:11:49.779 "params": { 00:11:49.779 "impl_name": "posix", 00:11:49.779 "recv_buf_size": 2097152, 00:11:49.779 "send_buf_size": 2097152, 00:11:49.779 "enable_recv_pipe": true, 00:11:49.779 "enable_quickack": false, 00:11:49.779 "enable_placement_id": 0, 00:11:49.779 "enable_zerocopy_send_server": true, 00:11:49.779 "enable_zerocopy_send_client": false, 00:11:49.779 "zerocopy_threshold": 0, 00:11:49.779 "tls_version": 0, 00:11:49.779 "enable_ktls": false 00:11:49.779 } 00:11:49.779 } 00:11:49.779 ] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "vmd", 00:11:49.779 "config": [] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "accel", 00:11:49.779 "config": [ 00:11:49.779 { 00:11:49.779 "method": "accel_set_options", 00:11:49.779 "params": { 00:11:49.779 "small_cache_size": 128, 00:11:49.779 "large_cache_size": 16, 00:11:49.779 "task_count": 2048, 00:11:49.779 "sequence_count": 2048, 00:11:49.779 "buf_count": 2048 00:11:49.779 } 00:11:49.779 } 00:11:49.779 ] 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "subsystem": "bdev", 00:11:49.779 "config": [ 00:11:49.779 { 00:11:49.779 "method": "bdev_set_options", 00:11:49.779 "params": { 00:11:49.779 "bdev_io_pool_size": 65535, 00:11:49.779 "bdev_io_cache_size": 256, 00:11:49.779 "bdev_auto_examine": true, 00:11:49.779 "iobuf_small_cache_size": 128, 00:11:49.779 "iobuf_large_cache_size": 16 00:11:49.779 } 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "method": "bdev_raid_set_options", 00:11:49.779 "params": { 00:11:49.779 "process_window_size_kb": 1024, 00:11:49.779 "process_max_bandwidth_mb_sec": 0 00:11:49.779 } 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "method": "bdev_iscsi_set_options", 
00:11:49.779 "params": { 00:11:49.779 "timeout_sec": 30 00:11:49.779 } 00:11:49.779 }, 00:11:49.779 { 00:11:49.779 "method": "bdev_nvme_set_options", 00:11:49.779 "params": { 00:11:49.779 "action_on_timeout": "none", 00:11:49.779 "timeout_us": 0, 00:11:49.779 "timeout_admin_us": 0, 00:11:49.779 "keep_alive_timeout_ms": 10000, 00:11:49.779 "arbitration_burst": 0, 00:11:49.779 "low_priority_weight": 0, 00:11:49.780 "medium_priority_weight": 0, 00:11:49.780 "high_priority_weight": 0, 00:11:49.780 "nvme_adminq_poll_period_us": 10000, 00:11:49.780 "nvme_ioq_poll_period_us": 0, 00:11:49.780 "io_queue_requests": 0, 00:11:49.780 "delay_cmd_submit": true, 00:11:49.780 "transport_retry_count": 4, 00:11:49.780 "bdev_retry_count": 3, 00:11:49.780 "transport_ack_timeout": 0, 00:11:49.780 "ctrlr_loss_timeout_sec": 0, 00:11:49.780 "reconnect_delay_sec": 0, 00:11:49.780 "fast_io_fail_timeout_sec": 0, 00:11:49.780 "disable_auto_failback": false, 00:11:49.780 "generate_uuids": false, 00:11:49.780 "transport_tos": 0, 00:11:49.780 "nvme_error_stat": false, 00:11:49.780 "rdma_srq_size": 0, 00:11:49.780 "io_path_stat": false, 00:11:49.780 "allow_accel_sequence": false, 00:11:49.780 "rdma_max_cq_size": 0, 00:11:49.780 "rdma_cm_event_timeout_ms": 0, 00:11:49.780 "dhchap_digests": [ 00:11:49.780 "sha256", 00:11:49.780 "sha384", 00:11:49.780 "sha512" 00:11:49.780 ], 00:11:49.780 "dhchap_dhgroups": [ 00:11:49.780 "null", 00:11:49.780 "ffdhe2048", 00:11:49.780 "ffdhe3072", 00:11:49.780 "ffdhe4096", 00:11:49.780 "ffdhe6144", 00:11:49.780 "ffdhe8192" 00:11:49.780 ] 00:11:49.780 } 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "method": "bdev_nvme_set_hotplug", 00:11:49.780 "params": { 00:11:49.780 "period_us": 100000, 00:11:49.780 "enable": false 00:11:49.780 } 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "method": "bdev_wait_for_examine" 00:11:49.780 } 00:11:49.780 ] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "scsi", 00:11:49.780 "config": null 00:11:49.780 }, 00:11:49.780 { 
00:11:49.780 "subsystem": "scheduler", 00:11:49.780 "config": [ 00:11:49.780 { 00:11:49.780 "method": "framework_set_scheduler", 00:11:49.780 "params": { 00:11:49.780 "name": "static" 00:11:49.780 } 00:11:49.780 } 00:11:49.780 ] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "vhost_scsi", 00:11:49.780 "config": [] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "vhost_blk", 00:11:49.780 "config": [] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "ublk", 00:11:49.780 "config": [] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "nbd", 00:11:49.780 "config": [] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "nvmf", 00:11:49.780 "config": [ 00:11:49.780 { 00:11:49.780 "method": "nvmf_set_config", 00:11:49.780 "params": { 00:11:49.780 "discovery_filter": "match_any", 00:11:49.780 "admin_cmd_passthru": { 00:11:49.780 "identify_ctrlr": false 00:11:49.780 }, 00:11:49.780 "dhchap_digests": [ 00:11:49.780 "sha256", 00:11:49.780 "sha384", 00:11:49.780 "sha512" 00:11:49.780 ], 00:11:49.780 "dhchap_dhgroups": [ 00:11:49.780 "null", 00:11:49.780 "ffdhe2048", 00:11:49.780 "ffdhe3072", 00:11:49.780 "ffdhe4096", 00:11:49.780 "ffdhe6144", 00:11:49.780 "ffdhe8192" 00:11:49.780 ] 00:11:49.780 } 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "method": "nvmf_set_max_subsystems", 00:11:49.780 "params": { 00:11:49.780 "max_subsystems": 1024 00:11:49.780 } 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "method": "nvmf_set_crdt", 00:11:49.780 "params": { 00:11:49.780 "crdt1": 0, 00:11:49.780 "crdt2": 0, 00:11:49.780 "crdt3": 0 00:11:49.780 } 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "method": "nvmf_create_transport", 00:11:49.780 "params": { 00:11:49.780 "trtype": "TCP", 00:11:49.780 "max_queue_depth": 128, 00:11:49.780 "max_io_qpairs_per_ctrlr": 127, 00:11:49.780 "in_capsule_data_size": 4096, 00:11:49.780 "max_io_size": 131072, 00:11:49.780 "io_unit_size": 131072, 00:11:49.780 "max_aq_depth": 128, 00:11:49.780 "num_shared_buffers": 511, 
00:11:49.780 "buf_cache_size": 4294967295, 00:11:49.780 "dif_insert_or_strip": false, 00:11:49.780 "zcopy": false, 00:11:49.780 "c2h_success": true, 00:11:49.780 "sock_priority": 0, 00:11:49.780 "abort_timeout_sec": 1, 00:11:49.780 "ack_timeout": 0, 00:11:49.780 "data_wr_pool_size": 0 00:11:49.780 } 00:11:49.780 } 00:11:49.780 ] 00:11:49.780 }, 00:11:49.780 { 00:11:49.780 "subsystem": "iscsi", 00:11:49.780 "config": [ 00:11:49.780 { 00:11:49.780 "method": "iscsi_set_options", 00:11:49.780 "params": { 00:11:49.780 "node_base": "iqn.2016-06.io.spdk", 00:11:49.780 "max_sessions": 128, 00:11:49.780 "max_connections_per_session": 2, 00:11:49.780 "max_queue_depth": 64, 00:11:49.780 "default_time2wait": 2, 00:11:49.780 "default_time2retain": 20, 00:11:49.780 "first_burst_length": 8192, 00:11:49.780 "immediate_data": true, 00:11:49.780 "allow_duplicated_isid": false, 00:11:49.780 "error_recovery_level": 0, 00:11:49.780 "nop_timeout": 60, 00:11:49.780 "nop_in_interval": 30, 00:11:49.780 "disable_chap": false, 00:11:49.780 "require_chap": false, 00:11:49.780 "mutual_chap": false, 00:11:49.780 "chap_group": 0, 00:11:49.780 "max_large_datain_per_connection": 64, 00:11:49.780 "max_r2t_per_connection": 4, 00:11:49.780 "pdu_pool_size": 36864, 00:11:49.780 "immediate_data_pool_size": 16384, 00:11:49.780 "data_out_pool_size": 2048 00:11:49.780 } 00:11:49.780 } 00:11:49.780 ] 00:11:49.780 } 00:11:49.780 ] 00:11:49.780 } 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57018 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57018 ']' 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57018 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57018 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.780 killing process with pid 57018 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57018' 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57018 00:11:49.780 09:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57018 00:11:52.309 09:04:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57074 00:11:52.309 09:04:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:52.309 09:04:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57074 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57074 ']' 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57074 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57074 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:11:57.578 killing process with pid 57074 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57074' 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57074 00:11:57.578 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57074 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:00.109 00:12:00.109 real 0m11.472s 00:12:00.109 user 0m10.876s 00:12:00.109 sys 0m0.934s 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:00.109 ************************************ 00:12:00.109 END TEST skip_rpc_with_json 00:12:00.109 ************************************ 00:12:00.109 09:04:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:00.109 09:04:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:00.109 09:04:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.109 09:04:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.109 ************************************ 00:12:00.109 START TEST skip_rpc_with_delay 00:12:00.109 ************************************ 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:12:00.109 09:04:58 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:00.109 [2024-11-06 09:04:58.751863] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.109 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.109 00:12:00.109 real 0m0.180s 00:12:00.109 user 0m0.096s 00:12:00.109 sys 0m0.082s 00:12:00.109 ************************************ 00:12:00.109 END TEST skip_rpc_with_delay 00:12:00.110 ************************************ 00:12:00.110 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.110 09:04:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:00.110 09:04:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:00.110 09:04:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:00.110 09:04:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:00.110 09:04:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:00.110 09:04:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.110 09:04:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.110 ************************************ 00:12:00.110 START TEST exit_on_failed_rpc_init 00:12:00.110 ************************************ 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57202 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57202 00:12:00.110 09:04:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57202 ']' 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:00.110 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:00.110 [2024-11-06 09:04:59.020376] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:00.110 [2024-11-06 09:04:59.020678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57202 ] 00:12:00.368 [2024-11-06 09:04:59.206301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.368 [2024-11-06 09:04:59.325255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:01.304 09:05:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.304 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:01.305 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:01.305 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:01.305 [2024-11-06 09:05:00.322497] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:12:01.305 [2024-11-06 09:05:00.322839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:12:01.563 [2024-11-06 09:05:00.504270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.821 [2024-11-06 09:05:00.627423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.821 [2024-11-06 09:05:00.627536] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:12:01.821 [2024-11-06 09:05:00.627554] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:01.821 [2024-11-06 09:05:00.627571] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57202 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57202 ']' 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57202 00:12:02.082 09:05:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57202 00:12:02.082 killing process with pid 57202 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57202' 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57202 00:12:02.082 09:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57202 00:12:04.613 00:12:04.613 real 0m4.445s 00:12:04.613 user 0m4.783s 00:12:04.613 sys 0m0.611s 00:12:04.613 09:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.613 ************************************ 00:12:04.613 END TEST exit_on_failed_rpc_init 00:12:04.613 ************************************ 00:12:04.613 09:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:04.613 09:05:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:04.613 00:12:04.613 real 0m24.149s 00:12:04.613 user 0m22.975s 00:12:04.613 sys 0m2.383s 00:12:04.613 09:05:03 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.613 ************************************ 00:12:04.613 END TEST skip_rpc 00:12:04.613 ************************************ 00:12:04.613 09:05:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.613 09:05:03 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:04.613 09:05:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:04.613 09:05:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.613 09:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:04.613 ************************************ 00:12:04.613 START TEST rpc_client 00:12:04.613 ************************************ 00:12:04.613 09:05:03 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:04.613 * Looking for test storage... 00:12:04.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:12:04.613 09:05:03 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:04.613 09:05:03 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:12:04.613 09:05:03 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@345 
-- # : 1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.876 09:05:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.876 --rc genhtml_branch_coverage=1 00:12:04.876 --rc genhtml_function_coverage=1 00:12:04.876 --rc genhtml_legend=1 00:12:04.876 --rc geninfo_all_blocks=1 00:12:04.876 --rc geninfo_unexecuted_blocks=1 00:12:04.876 00:12:04.876 ' 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.876 --rc genhtml_branch_coverage=1 00:12:04.876 --rc genhtml_function_coverage=1 00:12:04.876 --rc 
genhtml_legend=1 00:12:04.876 --rc geninfo_all_blocks=1 00:12:04.876 --rc geninfo_unexecuted_blocks=1 00:12:04.876 00:12:04.876 ' 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.876 --rc genhtml_branch_coverage=1 00:12:04.876 --rc genhtml_function_coverage=1 00:12:04.876 --rc genhtml_legend=1 00:12:04.876 --rc geninfo_all_blocks=1 00:12:04.876 --rc geninfo_unexecuted_blocks=1 00:12:04.876 00:12:04.876 ' 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.876 --rc genhtml_branch_coverage=1 00:12:04.876 --rc genhtml_function_coverage=1 00:12:04.876 --rc genhtml_legend=1 00:12:04.876 --rc geninfo_all_blocks=1 00:12:04.876 --rc geninfo_unexecuted_blocks=1 00:12:04.876 00:12:04.876 ' 00:12:04.876 09:05:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:12:04.876 OK 00:12:04.876 09:05:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:04.876 00:12:04.876 real 0m0.317s 00:12:04.876 user 0m0.167s 00:12:04.876 sys 0m0.166s 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.876 09:05:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:04.876 ************************************ 00:12:04.876 END TEST rpc_client 00:12:04.876 ************************************ 00:12:04.876 09:05:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:04.876 09:05:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:04.876 09:05:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.876 09:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:04.876 ************************************ 00:12:04.876 START TEST json_config 
00:12:04.876 ************************************ 00:12:04.876 09:05:03 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:05.137 09:05:03 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.137 09:05:03 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.137 09:05:03 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.137 09:05:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.137 09:05:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.137 09:05:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.137 09:05:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.137 09:05:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.137 09:05:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:12:05.137 09:05:04 json_config -- scripts/common.sh@345 -- # : 1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.137 09:05:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.137 09:05:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@353 -- # local d=1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.137 09:05:04 json_config -- scripts/common.sh@355 -- # echo 1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.137 09:05:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@353 -- # local d=2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.137 09:05:04 json_config -- scripts/common.sh@355 -- # echo 2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.137 09:05:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.137 09:05:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.137 09:05:04 json_config -- scripts/common.sh@368 -- # return 0 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.137 --rc genhtml_branch_coverage=1 00:12:05.137 --rc genhtml_function_coverage=1 00:12:05.137 --rc genhtml_legend=1 00:12:05.137 --rc geninfo_all_blocks=1 00:12:05.137 --rc geninfo_unexecuted_blocks=1 00:12:05.137 00:12:05.137 ' 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.137 --rc genhtml_branch_coverage=1 00:12:05.137 --rc genhtml_function_coverage=1 00:12:05.137 --rc genhtml_legend=1 00:12:05.137 --rc geninfo_all_blocks=1 00:12:05.137 --rc geninfo_unexecuted_blocks=1 00:12:05.137 00:12:05.137 ' 00:12:05.137 09:05:04 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.137 --rc genhtml_branch_coverage=1 00:12:05.137 --rc genhtml_function_coverage=1 00:12:05.137 --rc genhtml_legend=1 00:12:05.137 --rc geninfo_all_blocks=1 00:12:05.137 --rc geninfo_unexecuted_blocks=1 00:12:05.137 00:12:05.137 ' 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.137 --rc genhtml_branch_coverage=1 00:12:05.137 --rc genhtml_function_coverage=1 00:12:05.137 --rc genhtml_legend=1 00:12:05.137 --rc geninfo_all_blocks=1 00:12:05.137 --rc geninfo_unexecuted_blocks=1 00:12:05.137 00:12:05.137 ' 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c18da4e-01f5-448a-ac6a-0f8254a46070 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=3c18da4e-01f5-448a-ac6a-0f8254a46070 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.137 09:05:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.137 09:05:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.137 09:05:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.137 09:05:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.137 09:05:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.137 09:05:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.137 09:05:04 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.137 09:05:04 json_config -- paths/export.sh@5 -- # export PATH 00:12:05.137 09:05:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@51 -- # : 0 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.137 09:05:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
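The `json_config` prologue traced above gates lcov options on `lt 1.15 2`, which `scripts/common.sh` resolves by splitting each version string on `.`, `-` and `:` into an array and comparing element by element. A minimal standalone sketch of that comparison (the function name here is illustrative, not the exact `scripts/common.sh` internals):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, modeled on the cmp_versions trace
# above: split on '.', '-' and ':' into arrays, compare per component.
ver_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # a missing component compares as 0
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1    # equal versions: not strictly less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why `1.15 < 2` holds here even though a plain string compare would put "1.15" after "2" is false lexically only by accident; the per-component compare makes `1.9.1 < 1.15` come out correctly as well.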
00:12:05.137 WARNING: No tests are enabled so not running JSON configuration tests 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:12:05.137 09:05:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:12:05.137 ************************************ 00:12:05.137 END TEST json_config 00:12:05.137 ************************************ 00:12:05.137 00:12:05.137 real 0m0.239s 00:12:05.137 user 0m0.139s 00:12:05.137 sys 0m0.102s 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.137 09:05:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:05.396 09:05:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:05.396 09:05:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:05.396 09:05:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.396 09:05:04 -- common/autotest_common.sh@10 -- # set +x 00:12:05.396 ************************************ 00:12:05.396 START TEST json_config_extra_key 00:12:05.396 ************************************ 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.396 09:05:04 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.396 09:05:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.396 --rc genhtml_branch_coverage=1 00:12:05.396 --rc genhtml_function_coverage=1 00:12:05.396 --rc genhtml_legend=1 00:12:05.396 --rc geninfo_all_blocks=1 00:12:05.396 --rc geninfo_unexecuted_blocks=1 00:12:05.396 00:12:05.396 ' 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.396 --rc genhtml_branch_coverage=1 00:12:05.396 --rc genhtml_function_coverage=1 00:12:05.396 --rc 
genhtml_legend=1 00:12:05.396 --rc geninfo_all_blocks=1 00:12:05.396 --rc geninfo_unexecuted_blocks=1 00:12:05.396 00:12:05.396 ' 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.396 --rc genhtml_branch_coverage=1 00:12:05.396 --rc genhtml_function_coverage=1 00:12:05.396 --rc genhtml_legend=1 00:12:05.396 --rc geninfo_all_blocks=1 00:12:05.396 --rc geninfo_unexecuted_blocks=1 00:12:05.396 00:12:05.396 ' 00:12:05.396 09:05:04 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.396 --rc genhtml_branch_coverage=1 00:12:05.396 --rc genhtml_function_coverage=1 00:12:05.396 --rc genhtml_legend=1 00:12:05.396 --rc geninfo_all_blocks=1 00:12:05.396 --rc geninfo_unexecuted_blocks=1 00:12:05.396 00:12:05.396 ' 00:12:05.396 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.396 09:05:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c18da4e-01f5-448a-ac6a-0f8254a46070 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3c18da4e-01f5-448a-ac6a-0f8254a46070 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.397 09:05:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.397 09:05:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.397 09:05:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.397 09:05:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.397 09:05:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 09:05:04 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 09:05:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 09:05:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:05.397 09:05:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
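Note how every `source` of `paths/export.sh` above prepends the same toolchain directories again, so `PATH` accumulates duplicate `/opt/golangci`, `/opt/protoc` and `/opt/go` entries. A helper like the following (not part of SPDK, purely illustrative) could collapse such a `PATH` to the first occurrence of each entry while preserving order:

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, keeping first-seen order, as an
# illustration of cleaning up the repeated prepends visible above.
dedupe_path() {
    local entry out=
    local -A seen=()
    while IFS= read -rd: entry; do
        [[ -n $entry && -z ${seen[$entry]} ]] || continue
        seen[$entry]=1
        out+=${out:+:}$entry
    done <<< "$1:"   # trailing ':' so the final entry is read too
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# -> /opt/go/bin:/usr/bin:/sbin
```

The duplicates are harmless for lookup (the first hit wins), so the test keeps running; they only make the traced `PATH` lines long.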
00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.397 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.397 09:05:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:05.397 INFO: launching applications... 
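The setup traced above keys every per-app attribute (`app_pid`, `app_socket`, `app_params`, `configs_path`) by the same name, `target`, across parallel associative arrays. A minimal sketch of that configuration pattern (the `start_app` helper and its `sleep` stand-in are illustrative, not SPDK's actual launcher):

```shell
#!/usr/bin/env bash
# Parallel associative arrays, keyed by app name, as in the trace above.
declare -A app_pid=([target]='')
declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
declare -A app_params=([target]='-m 0x1 -s 1024')

start_app() {
    local app=$1
    # The real test launches spdk_tgt with ${app_params[$app]} and
    # -r ${app_socket[$app]}; a background sleep stands in for it here.
    sleep 60 &
    app_pid[$app]=$!
    echo "started $app (pid ${app_pid[$app]}) on ${app_socket[$app]}"
}
```

Keeping the arrays parallel means adding a second app later is just a matter of adding a second key, with no changes to the start/stop helpers.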
00:12:05.397 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57441 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:05.397 Waiting for target to run... 00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57441 /var/tmp/spdk_tgt.sock 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57441 ']' 00:12:05.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
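The `waitforlisten 57441 /var/tmp/spdk_tgt.sock` call above blocks until the freshly launched `spdk_tgt` creates and listens on its UNIX-domain RPC socket. A generic sketch of that wait (function name, retry count and interval are illustrative, not SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, bailing out early if the
# owning process dies before it ever starts listening.
wait_for_socket() {
    local sock=$1 pid=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [[ -S $sock ]] && return 0               # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}

# Usage: wait_for_socket /var/tmp/spdk_tgt.sock "$target_pid" || exit 1
```

Checking the PID on every iteration is what turns a crashed target into a fast failure instead of a full timeout.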
00:12:05.397 09:05:04 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:05.397 09:05:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:05.655 [2024-11-06 09:05:04.530940] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:05.655 [2024-11-06 09:05:04.531282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57441 ] 00:12:05.913 [2024-11-06 09:05:04.927799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.172 [2024-11-06 09:05:05.039666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.739 00:12:06.739 INFO: shutting down applications... 00:12:06.739 09:05:05 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:06.739 09:05:05 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:06.739 09:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
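The shutdown that `json_config/common.sh` traces next follows a fixed pattern: send SIGINT, then poll liveness with `kill -0` at 0.5 s intervals for up to 30 iterations. A sketch of that loop (the SIGKILL escalation at the end is an illustrative addition, not necessarily what SPDK's helper does on timeout):

```shell
#!/usr/bin/env bash
# Graceful-shutdown loop modeled on the kill -SIGINT / kill -0 / sleep 0.5
# sequence traced below: ask politely, poll, escalate only on timeout.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
        sleep 0.5
    done
    kill -SIGKILL "$pid" 2>/dev/null || true      # last resort (assumed)
    return 1
}
```

`kill -0` sends no signal at all; it only asks the kernel whether the PID still exists, which is why it is safe to call in a tight loop.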
00:12:06.739 09:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57441 ]] 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57441 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:06.739 09:05:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:07.305 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:07.305 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:07.305 09:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:07.305 09:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:07.872 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:07.872 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:07.872 09:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:07.872 09:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:08.460 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:08.460 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:08.460 09:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:08.460 09:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:09.028 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:12:09.028 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:09.028 09:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:09.028 09:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:09.286 09:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:09.286 09:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:09.286 09:05:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:09.286 09:05:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57441 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:09.853 SPDK target shutdown done 00:12:09.853 Success 00:12:09.853 09:05:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:09.853 09:05:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:09.853 00:12:09.853 real 0m4.604s 00:12:09.853 user 0m4.076s 00:12:09.853 sys 0m0.637s 00:12:09.853 09:05:08 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:09.853 09:05:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:09.853 ************************************ 00:12:09.853 END TEST json_config_extra_key 00:12:09.853 ************************************ 00:12:09.853 09:05:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:09.853 09:05:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:09.853 09:05:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:09.853 09:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:09.853 ************************************ 00:12:09.853 START TEST alias_rpc 00:12:09.853 ************************************ 00:12:09.853 09:05:08 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:10.111 * Looking for test storage... 00:12:10.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:10.111 09:05:09 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.111 09:05:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.111 09:05:09 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.112 --rc genhtml_branch_coverage=1 00:12:10.112 --rc genhtml_function_coverage=1 00:12:10.112 --rc genhtml_legend=1 00:12:10.112 --rc geninfo_all_blocks=1 00:12:10.112 --rc geninfo_unexecuted_blocks=1 00:12:10.112 00:12:10.112 ' 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.112 --rc genhtml_branch_coverage=1 00:12:10.112 --rc genhtml_function_coverage=1 00:12:10.112 --rc 
genhtml_legend=1 00:12:10.112 --rc geninfo_all_blocks=1 00:12:10.112 --rc geninfo_unexecuted_blocks=1 00:12:10.112 00:12:10.112 ' 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.112 --rc genhtml_branch_coverage=1 00:12:10.112 --rc genhtml_function_coverage=1 00:12:10.112 --rc genhtml_legend=1 00:12:10.112 --rc geninfo_all_blocks=1 00:12:10.112 --rc geninfo_unexecuted_blocks=1 00:12:10.112 00:12:10.112 ' 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:10.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.112 --rc genhtml_branch_coverage=1 00:12:10.112 --rc genhtml_function_coverage=1 00:12:10.112 --rc genhtml_legend=1 00:12:10.112 --rc geninfo_all_blocks=1 00:12:10.112 --rc geninfo_unexecuted_blocks=1 00:12:10.112 00:12:10.112 ' 00:12:10.112 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:10.112 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57547 00:12:10.112 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:10.112 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57547 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57547 ']' 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
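The `killprocess` teardown traced further down checks `ps --no-headers -o comm=` for the PID's command name (and refuses to signal `sudo`) before killing, so a recycled PID is never signalled by mistake. A sketch of that pattern (names and the exact checks are illustrative):

```shell
#!/usr/bin/env bash
# Verify the PID still belongs to a process we are willing to kill
# before signalling it, mirroring the killprocess trace below.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0           # nothing to do
    name=$(ps --no-headers -o comm= "$pid") || return 0
    [[ $name == sudo ]] && return 1                  # never signal sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it's our child
}
```

The `wait` only succeeds for children of the calling shell; for anything else it is a no-op, which is why it is guarded.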
00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:10.112 09:05:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.371 [2024-11-06 09:05:09.206981] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:10.371 [2024-11-06 09:05:09.207121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57547 ] 00:12:10.371 [2024-11-06 09:05:09.387332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.630 [2024-11-06 09:05:09.503368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.566 09:05:10 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:11.566 09:05:10 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:11.566 09:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:11.825 09:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57547 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57547 ']' 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57547 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57547 00:12:11.825 killing process with pid 57547 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57547' 00:12:11.825 09:05:10 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57547 00:12:11.825 09:05:10 alias_rpc -- common/autotest_common.sh@976 -- # wait 57547 00:12:14.403 ************************************ 00:12:14.403 END TEST alias_rpc 00:12:14.403 ************************************ 00:12:14.403 00:12:14.403 real 0m4.236s 00:12:14.403 user 0m4.204s 00:12:14.403 sys 0m0.618s 00:12:14.403 09:05:13 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.403 09:05:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.403 09:05:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:12:14.403 09:05:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:14.403 09:05:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:14.404 09:05:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.404 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:14.404 ************************************ 00:12:14.404 START TEST spdkcli_tcp 00:12:14.404 ************************************ 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:14.404 * Looking for test storage... 
00:12:14.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.404 09:05:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 09:05:13 spdkcli_tcp -- 
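The xtrace above steps through `scripts/common.sh`'s `cmp_versions` (invoked as `lt 1.15 2`): both dotted versions are split on `.-:` into arrays, components are compared pairwise up to the longer length, and missing components default to zero. A minimal Python sketch of the same comparison (the helper name `version_lt` is hypothetical, not part of SPDK):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Return True if dotted version a sorts strictly before b,
    mirroring cmp_versions in scripts/common.sh."""
    # Split on the same separator class the shell uses (IFS=.-:)
    pa = [int(x) for x in re.split(r"[.\-:]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.\-:]", b) if x.isdigit()]
    # Pad the shorter list with zeros, like the shell loop that walks
    # v up to (ver1_l > ver2_l ? ver1_l : ver2_l) components.
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb
```

With this, `version_lt("1.15", "2")` is true, which is why the trace takes the `return 0` branch and enables the extended lcov options.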
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.404 --rc genhtml_branch_coverage=1 00:12:14.404 --rc genhtml_function_coverage=1 00:12:14.404 --rc genhtml_legend=1 00:12:14.404 --rc geninfo_all_blocks=1 00:12:14.404 --rc geninfo_unexecuted_blocks=1 00:12:14.404 00:12:14.404 ' 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57655 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:14.404 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57655 00:12:14.404 09:05:13 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 57655 ']' 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:14.404 09:05:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.662 [2024-11-06 09:05:13.539068] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:14.662 [2024-11-06 09:05:13.539212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57655 ] 00:12:14.921 [2024-11-06 09:05:13.723328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:14.921 [2024-11-06 09:05:13.849111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.921 [2024-11-06 09:05:13.849147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.857 09:05:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:15.857 09:05:14 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:12:15.857 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57677 00:12:15.857 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:15.857 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:16.118 [ 00:12:16.118 "bdev_malloc_delete", 
00:12:16.118 "bdev_malloc_create", 00:12:16.118 "bdev_null_resize", 00:12:16.118 "bdev_null_delete", 00:12:16.118 "bdev_null_create", 00:12:16.118 "bdev_nvme_cuse_unregister", 00:12:16.118 "bdev_nvme_cuse_register", 00:12:16.118 "bdev_opal_new_user", 00:12:16.118 "bdev_opal_set_lock_state", 00:12:16.118 "bdev_opal_delete", 00:12:16.118 "bdev_opal_get_info", 00:12:16.118 "bdev_opal_create", 00:12:16.118 "bdev_nvme_opal_revert", 00:12:16.118 "bdev_nvme_opal_init", 00:12:16.118 "bdev_nvme_send_cmd", 00:12:16.118 "bdev_nvme_set_keys", 00:12:16.118 "bdev_nvme_get_path_iostat", 00:12:16.118 "bdev_nvme_get_mdns_discovery_info", 00:12:16.118 "bdev_nvme_stop_mdns_discovery", 00:12:16.118 "bdev_nvme_start_mdns_discovery", 00:12:16.118 "bdev_nvme_set_multipath_policy", 00:12:16.118 "bdev_nvme_set_preferred_path", 00:12:16.118 "bdev_nvme_get_io_paths", 00:12:16.118 "bdev_nvme_remove_error_injection", 00:12:16.118 "bdev_nvme_add_error_injection", 00:12:16.118 "bdev_nvme_get_discovery_info", 00:12:16.118 "bdev_nvme_stop_discovery", 00:12:16.118 "bdev_nvme_start_discovery", 00:12:16.118 "bdev_nvme_get_controller_health_info", 00:12:16.118 "bdev_nvme_disable_controller", 00:12:16.118 "bdev_nvme_enable_controller", 00:12:16.118 "bdev_nvme_reset_controller", 00:12:16.118 "bdev_nvme_get_transport_statistics", 00:12:16.118 "bdev_nvme_apply_firmware", 00:12:16.118 "bdev_nvme_detach_controller", 00:12:16.118 "bdev_nvme_get_controllers", 00:12:16.118 "bdev_nvme_attach_controller", 00:12:16.118 "bdev_nvme_set_hotplug", 00:12:16.118 "bdev_nvme_set_options", 00:12:16.118 "bdev_passthru_delete", 00:12:16.118 "bdev_passthru_create", 00:12:16.118 "bdev_lvol_set_parent_bdev", 00:12:16.118 "bdev_lvol_set_parent", 00:12:16.118 "bdev_lvol_check_shallow_copy", 00:12:16.118 "bdev_lvol_start_shallow_copy", 00:12:16.118 "bdev_lvol_grow_lvstore", 00:12:16.118 "bdev_lvol_get_lvols", 00:12:16.118 "bdev_lvol_get_lvstores", 00:12:16.118 "bdev_lvol_delete", 00:12:16.118 "bdev_lvol_set_read_only", 
00:12:16.118 "bdev_lvol_resize", 00:12:16.118 "bdev_lvol_decouple_parent", 00:12:16.118 "bdev_lvol_inflate", 00:12:16.118 "bdev_lvol_rename", 00:12:16.118 "bdev_lvol_clone_bdev", 00:12:16.118 "bdev_lvol_clone", 00:12:16.118 "bdev_lvol_snapshot", 00:12:16.118 "bdev_lvol_create", 00:12:16.118 "bdev_lvol_delete_lvstore", 00:12:16.118 "bdev_lvol_rename_lvstore", 00:12:16.118 "bdev_lvol_create_lvstore", 00:12:16.118 "bdev_raid_set_options", 00:12:16.118 "bdev_raid_remove_base_bdev", 00:12:16.118 "bdev_raid_add_base_bdev", 00:12:16.118 "bdev_raid_delete", 00:12:16.118 "bdev_raid_create", 00:12:16.118 "bdev_raid_get_bdevs", 00:12:16.118 "bdev_error_inject_error", 00:12:16.118 "bdev_error_delete", 00:12:16.118 "bdev_error_create", 00:12:16.118 "bdev_split_delete", 00:12:16.118 "bdev_split_create", 00:12:16.118 "bdev_delay_delete", 00:12:16.118 "bdev_delay_create", 00:12:16.118 "bdev_delay_update_latency", 00:12:16.118 "bdev_zone_block_delete", 00:12:16.118 "bdev_zone_block_create", 00:12:16.118 "blobfs_create", 00:12:16.118 "blobfs_detect", 00:12:16.118 "blobfs_set_cache_size", 00:12:16.118 "bdev_aio_delete", 00:12:16.118 "bdev_aio_rescan", 00:12:16.118 "bdev_aio_create", 00:12:16.118 "bdev_ftl_set_property", 00:12:16.118 "bdev_ftl_get_properties", 00:12:16.118 "bdev_ftl_get_stats", 00:12:16.118 "bdev_ftl_unmap", 00:12:16.118 "bdev_ftl_unload", 00:12:16.118 "bdev_ftl_delete", 00:12:16.118 "bdev_ftl_load", 00:12:16.118 "bdev_ftl_create", 00:12:16.118 "bdev_virtio_attach_controller", 00:12:16.118 "bdev_virtio_scsi_get_devices", 00:12:16.118 "bdev_virtio_detach_controller", 00:12:16.118 "bdev_virtio_blk_set_hotplug", 00:12:16.118 "bdev_iscsi_delete", 00:12:16.118 "bdev_iscsi_create", 00:12:16.118 "bdev_iscsi_set_options", 00:12:16.118 "accel_error_inject_error", 00:12:16.118 "ioat_scan_accel_module", 00:12:16.118 "dsa_scan_accel_module", 00:12:16.118 "iaa_scan_accel_module", 00:12:16.118 "keyring_file_remove_key", 00:12:16.118 "keyring_file_add_key", 00:12:16.118 
"keyring_linux_set_options", 00:12:16.118 "fsdev_aio_delete", 00:12:16.118 "fsdev_aio_create", 00:12:16.118 "iscsi_get_histogram", 00:12:16.118 "iscsi_enable_histogram", 00:12:16.118 "iscsi_set_options", 00:12:16.118 "iscsi_get_auth_groups", 00:12:16.118 "iscsi_auth_group_remove_secret", 00:12:16.118 "iscsi_auth_group_add_secret", 00:12:16.118 "iscsi_delete_auth_group", 00:12:16.118 "iscsi_create_auth_group", 00:12:16.118 "iscsi_set_discovery_auth", 00:12:16.118 "iscsi_get_options", 00:12:16.118 "iscsi_target_node_request_logout", 00:12:16.118 "iscsi_target_node_set_redirect", 00:12:16.118 "iscsi_target_node_set_auth", 00:12:16.118 "iscsi_target_node_add_lun", 00:12:16.118 "iscsi_get_stats", 00:12:16.118 "iscsi_get_connections", 00:12:16.118 "iscsi_portal_group_set_auth", 00:12:16.118 "iscsi_start_portal_group", 00:12:16.118 "iscsi_delete_portal_group", 00:12:16.118 "iscsi_create_portal_group", 00:12:16.118 "iscsi_get_portal_groups", 00:12:16.118 "iscsi_delete_target_node", 00:12:16.118 "iscsi_target_node_remove_pg_ig_maps", 00:12:16.118 "iscsi_target_node_add_pg_ig_maps", 00:12:16.118 "iscsi_create_target_node", 00:12:16.118 "iscsi_get_target_nodes", 00:12:16.118 "iscsi_delete_initiator_group", 00:12:16.118 "iscsi_initiator_group_remove_initiators", 00:12:16.118 "iscsi_initiator_group_add_initiators", 00:12:16.118 "iscsi_create_initiator_group", 00:12:16.118 "iscsi_get_initiator_groups", 00:12:16.118 "nvmf_set_crdt", 00:12:16.118 "nvmf_set_config", 00:12:16.118 "nvmf_set_max_subsystems", 00:12:16.118 "nvmf_stop_mdns_prr", 00:12:16.118 "nvmf_publish_mdns_prr", 00:12:16.118 "nvmf_subsystem_get_listeners", 00:12:16.118 "nvmf_subsystem_get_qpairs", 00:12:16.118 "nvmf_subsystem_get_controllers", 00:12:16.118 "nvmf_get_stats", 00:12:16.118 "nvmf_get_transports", 00:12:16.118 "nvmf_create_transport", 00:12:16.118 "nvmf_get_targets", 00:12:16.118 "nvmf_delete_target", 00:12:16.118 "nvmf_create_target", 00:12:16.118 "nvmf_subsystem_allow_any_host", 00:12:16.118 
"nvmf_subsystem_set_keys", 00:12:16.118 "nvmf_subsystem_remove_host", 00:12:16.118 "nvmf_subsystem_add_host", 00:12:16.118 "nvmf_ns_remove_host", 00:12:16.118 "nvmf_ns_add_host", 00:12:16.118 "nvmf_subsystem_remove_ns", 00:12:16.118 "nvmf_subsystem_set_ns_ana_group", 00:12:16.118 "nvmf_subsystem_add_ns", 00:12:16.118 "nvmf_subsystem_listener_set_ana_state", 00:12:16.118 "nvmf_discovery_get_referrals", 00:12:16.118 "nvmf_discovery_remove_referral", 00:12:16.118 "nvmf_discovery_add_referral", 00:12:16.118 "nvmf_subsystem_remove_listener", 00:12:16.118 "nvmf_subsystem_add_listener", 00:12:16.118 "nvmf_delete_subsystem", 00:12:16.118 "nvmf_create_subsystem", 00:12:16.118 "nvmf_get_subsystems", 00:12:16.118 "env_dpdk_get_mem_stats", 00:12:16.118 "nbd_get_disks", 00:12:16.118 "nbd_stop_disk", 00:12:16.118 "nbd_start_disk", 00:12:16.118 "ublk_recover_disk", 00:12:16.118 "ublk_get_disks", 00:12:16.118 "ublk_stop_disk", 00:12:16.118 "ublk_start_disk", 00:12:16.118 "ublk_destroy_target", 00:12:16.118 "ublk_create_target", 00:12:16.118 "virtio_blk_create_transport", 00:12:16.118 "virtio_blk_get_transports", 00:12:16.118 "vhost_controller_set_coalescing", 00:12:16.118 "vhost_get_controllers", 00:12:16.118 "vhost_delete_controller", 00:12:16.118 "vhost_create_blk_controller", 00:12:16.118 "vhost_scsi_controller_remove_target", 00:12:16.118 "vhost_scsi_controller_add_target", 00:12:16.118 "vhost_start_scsi_controller", 00:12:16.118 "vhost_create_scsi_controller", 00:12:16.119 "thread_set_cpumask", 00:12:16.119 "scheduler_set_options", 00:12:16.119 "framework_get_governor", 00:12:16.119 "framework_get_scheduler", 00:12:16.119 "framework_set_scheduler", 00:12:16.119 "framework_get_reactors", 00:12:16.119 "thread_get_io_channels", 00:12:16.119 "thread_get_pollers", 00:12:16.119 "thread_get_stats", 00:12:16.119 "framework_monitor_context_switch", 00:12:16.119 "spdk_kill_instance", 00:12:16.119 "log_enable_timestamps", 00:12:16.119 "log_get_flags", 00:12:16.119 "log_clear_flag", 
00:12:16.119 "log_set_flag", 00:12:16.119 "log_get_level", 00:12:16.119 "log_set_level", 00:12:16.119 "log_get_print_level", 00:12:16.119 "log_set_print_level", 00:12:16.119 "framework_enable_cpumask_locks", 00:12:16.119 "framework_disable_cpumask_locks", 00:12:16.119 "framework_wait_init", 00:12:16.119 "framework_start_init", 00:12:16.119 "scsi_get_devices", 00:12:16.119 "bdev_get_histogram", 00:12:16.119 "bdev_enable_histogram", 00:12:16.119 "bdev_set_qos_limit", 00:12:16.119 "bdev_set_qd_sampling_period", 00:12:16.119 "bdev_get_bdevs", 00:12:16.119 "bdev_reset_iostat", 00:12:16.119 "bdev_get_iostat", 00:12:16.119 "bdev_examine", 00:12:16.119 "bdev_wait_for_examine", 00:12:16.119 "bdev_set_options", 00:12:16.119 "accel_get_stats", 00:12:16.119 "accel_set_options", 00:12:16.119 "accel_set_driver", 00:12:16.119 "accel_crypto_key_destroy", 00:12:16.119 "accel_crypto_keys_get", 00:12:16.119 "accel_crypto_key_create", 00:12:16.119 "accel_assign_opc", 00:12:16.119 "accel_get_module_info", 00:12:16.119 "accel_get_opc_assignments", 00:12:16.119 "vmd_rescan", 00:12:16.119 "vmd_remove_device", 00:12:16.119 "vmd_enable", 00:12:16.119 "sock_get_default_impl", 00:12:16.119 "sock_set_default_impl", 00:12:16.119 "sock_impl_set_options", 00:12:16.119 "sock_impl_get_options", 00:12:16.119 "iobuf_get_stats", 00:12:16.119 "iobuf_set_options", 00:12:16.119 "keyring_get_keys", 00:12:16.119 "framework_get_pci_devices", 00:12:16.119 "framework_get_config", 00:12:16.119 "framework_get_subsystems", 00:12:16.119 "fsdev_set_opts", 00:12:16.119 "fsdev_get_opts", 00:12:16.119 "trace_get_info", 00:12:16.119 "trace_get_tpoint_group_mask", 00:12:16.119 "trace_disable_tpoint_group", 00:12:16.119 "trace_enable_tpoint_group", 00:12:16.119 "trace_clear_tpoint_mask", 00:12:16.119 "trace_set_tpoint_mask", 00:12:16.119 "notify_get_notifications", 00:12:16.119 "notify_get_types", 00:12:16.119 "spdk_get_version", 00:12:16.119 "rpc_get_methods" 00:12:16.119 ] 00:12:16.119 09:05:14 spdkcli_tcp -- 
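The method list above is the response to `rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods`, sent through the `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` bridge started on the previous line. SPDK's RPC server speaks JSON-RPC 2.0; a sketch of the request body such a call would carry (framing and transport details are simplified assumptions, and `build_rpc_request` is an illustrative name, not SPDK API):

```python
import json

def build_rpc_request(method: str, request_id: int = 1, params=None) -> str:
    """Build a JSON-RPC 2.0 request like the one rpc.py sends to spdk_tgt."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params  # rpc_get_methods takes no required params
    return json.dumps(req)
```

For `rpc_get_methods` the request is parameterless, and the server answers with the JSON array of method names captured in the log.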
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:16.119 09:05:14 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.119 09:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.119 09:05:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:16.119 09:05:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57655 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57655 ']' 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57655 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57655 00:12:16.119 killing process with pid 57655 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57655' 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57655 00:12:16.119 09:05:15 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57655 00:12:18.652 ************************************ 00:12:18.652 END TEST spdkcli_tcp 00:12:18.652 ************************************ 00:12:18.652 00:12:18.652 real 0m4.329s 00:12:18.652 user 0m7.685s 00:12:18.652 sys 0m0.682s 00:12:18.652 09:05:17 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.652 09:05:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.652 09:05:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:18.652 09:05:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:18.652 09:05:17 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.652 09:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:18.652 ************************************ 00:12:18.652 START TEST dpdk_mem_utility 00:12:18.652 ************************************ 00:12:18.652 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:18.912 * Looking for test storage... 00:12:18.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:12:18.912 
09:05:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.912 09:05:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.912 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.913 --rc genhtml_branch_coverage=1 00:12:18.913 --rc genhtml_function_coverage=1 00:12:18.913 --rc genhtml_legend=1 00:12:18.913 --rc geninfo_all_blocks=1 00:12:18.913 --rc geninfo_unexecuted_blocks=1 00:12:18.913 00:12:18.913 ' 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.913 --rc 
genhtml_branch_coverage=1 00:12:18.913 --rc genhtml_function_coverage=1 00:12:18.913 --rc genhtml_legend=1 00:12:18.913 --rc geninfo_all_blocks=1 00:12:18.913 --rc geninfo_unexecuted_blocks=1 00:12:18.913 00:12:18.913 ' 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.913 --rc genhtml_branch_coverage=1 00:12:18.913 --rc genhtml_function_coverage=1 00:12:18.913 --rc genhtml_legend=1 00:12:18.913 --rc geninfo_all_blocks=1 00:12:18.913 --rc geninfo_unexecuted_blocks=1 00:12:18.913 00:12:18.913 ' 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.913 --rc genhtml_branch_coverage=1 00:12:18.913 --rc genhtml_function_coverage=1 00:12:18.913 --rc genhtml_legend=1 00:12:18.913 --rc geninfo_all_blocks=1 00:12:18.913 --rc geninfo_unexecuted_blocks=1 00:12:18.913 00:12:18.913 ' 00:12:18.913 09:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:18.913 09:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57782 00:12:18.913 09:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:18.913 09:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57782 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57782 ']' 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:18.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.913 09:05:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:18.913 [2024-11-06 09:05:17.934532] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:18.913 [2024-11-06 09:05:17.934853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57782 ] 00:12:19.171 [2024-11-06 09:05:18.117298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.429 [2024-11-06 09:05:18.244048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.366 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.366 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:12:20.366 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:20.366 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:20.366 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.366 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:20.366 { 00:12:20.366 "filename": "/tmp/spdk_mem_dump.txt" 00:12:20.366 } 00:12:20.366 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.366 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:20.366 DPDK memory size 816.000000 MiB in 1 heap(s) 00:12:20.366 1 heaps totaling size 816.000000 MiB 00:12:20.366 size: 
816.000000 MiB heap id: 0 00:12:20.366 end heaps---------- 00:12:20.366 9 mempools totaling size 595.772034 MiB 00:12:20.366 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:20.366 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:20.366 size: 92.545471 MiB name: bdev_io_57782 00:12:20.366 size: 50.003479 MiB name: msgpool_57782 00:12:20.366 size: 36.509338 MiB name: fsdev_io_57782 00:12:20.366 size: 21.763794 MiB name: PDU_Pool 00:12:20.366 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:20.366 size: 4.133484 MiB name: evtpool_57782 00:12:20.366 size: 0.026123 MiB name: Session_Pool 00:12:20.366 end mempools------- 00:12:20.366 6 memzones totaling size 4.142822 MiB 00:12:20.366 size: 1.000366 MiB name: RG_ring_0_57782 00:12:20.366 size: 1.000366 MiB name: RG_ring_1_57782 00:12:20.366 size: 1.000366 MiB name: RG_ring_4_57782 00:12:20.366 size: 1.000366 MiB name: RG_ring_5_57782 00:12:20.366 size: 0.125366 MiB name: RG_ring_2_57782 00:12:20.366 size: 0.015991 MiB name: RG_ring_3_57782 00:12:20.366 end memzones------- 00:12:20.366 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:20.366 heap id: 0 total size: 816.000000 MiB number of busy elements: 311 number of free elements: 18 00:12:20.366 list of free elements. 
size: 16.792358 MiB 00:12:20.366 element at address: 0x200006400000 with size: 1.995972 MiB 00:12:20.366 element at address: 0x20000a600000 with size: 1.995972 MiB 00:12:20.366 element at address: 0x200003e00000 with size: 1.991028 MiB 00:12:20.366 element at address: 0x200018d00040 with size: 0.999939 MiB 00:12:20.366 element at address: 0x200019100040 with size: 0.999939 MiB 00:12:20.366 element at address: 0x200019200000 with size: 0.999084 MiB 00:12:20.366 element at address: 0x200031e00000 with size: 0.994324 MiB 00:12:20.366 element at address: 0x200000400000 with size: 0.992004 MiB 00:12:20.366 element at address: 0x200018a00000 with size: 0.959656 MiB 00:12:20.366 element at address: 0x200019500040 with size: 0.936401 MiB 00:12:20.366 element at address: 0x200000200000 with size: 0.716980 MiB 00:12:20.366 element at address: 0x20001ac00000 with size: 0.562927 MiB 00:12:20.366 element at address: 0x200000c00000 with size: 0.490173 MiB 00:12:20.366 element at address: 0x200018e00000 with size: 0.487976 MiB 00:12:20.366 element at address: 0x200019600000 with size: 0.485413 MiB 00:12:20.366 element at address: 0x200012c00000 with size: 0.443237 MiB 00:12:20.366 element at address: 0x200028000000 with size: 0.390442 MiB 00:12:20.366 element at address: 0x200000800000 with size: 0.350891 MiB 00:12:20.366 list of standard malloc elements. 
size: 199.286743 MiB 00:12:20.366 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:12:20.366 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:12:20.366 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:12:20.366 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:12:20.366 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:12:20.366 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:12:20.366 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:12:20.366 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:12:20.366 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:12:20.366 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:12:20.366 element at address: 0x200012bff040 with size: 0.000305 MiB 00:12:20.366 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:12:20.366 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:12:20.366 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:12:20.367 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200000cff000 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff180 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff280 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff380 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff480 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff580 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff680 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff780 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bff880 with size: 0.000244 MiB 00:12:20.367 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71780 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71880 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71980 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c72080 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012c72180 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:12:20.367 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac915c0 with size: 0.000244 
MiB 00:12:20.367 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:12:20.367 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac931c0 
with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:12:20.368 element at 
address: 0x20001ac94dc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:12:20.368 element at address: 0x200028063f40 with size: 0.000244 MiB 00:12:20.368 element at address: 0x200028064040 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806af80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b080 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b180 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b280 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b380 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b480 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b580 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b680 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b780 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b880 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806b980 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806be80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c080 with size: 0.000244 MiB 
00:12:20.368 element at address: 0x20002806c180 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c280 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c380 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c480 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c580 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c680 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c780 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c880 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806c980 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d080 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d180 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d280 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d380 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d480 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d580 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d680 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d780 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d880 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806d980 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806da80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806db80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806dc80 with 
size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806de80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806df80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e080 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e180 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e280 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e380 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e480 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e580 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e680 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e780 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e880 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806e980 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f080 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f180 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f280 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f380 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f480 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f580 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f680 with size: 0.000244 MiB 00:12:20.368 element at address: 0x20002806f780 with size: 0.000244 MiB 00:12:20.368 element at address: 
0x20002806f880 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806f980 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806fa80 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:12:20.368 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:12:20.368 list of memzone associated elements. size: 599.920898 MiB
00:12:20.368 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:12:20.368 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:12:20.368 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:12:20.368 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:12:20.368 element at address: 0x200012df4740 with size: 92.045105 MiB
00:12:20.368 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57782_0
00:12:20.368 element at address: 0x200000dff340 with size: 48.003113 MiB
00:12:20.368 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57782_0
00:12:20.368 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:12:20.368 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57782_0
00:12:20.368 element at address: 0x2000197be900 with size: 20.255615 MiB
00:12:20.368 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:12:20.368 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:12:20.368 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:12:20.368 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:12:20.368 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57782_0
00:12:20.368 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:12:20.368 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57782
00:12:20.368 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:12:20.368 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57782
00:12:20.369 element at address: 0x200018efde00 with size: 1.008179 MiB
00:12:20.369 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:12:20.369 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:12:20.369 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:12:20.369 element at address: 0x200018afde00 with size: 1.008179 MiB
00:12:20.369 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:12:20.369 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:12:20.369 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:12:20.369 element at address: 0x200000cff100 with size: 1.000549 MiB
00:12:20.369 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57782
00:12:20.369 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:12:20.369 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57782
00:12:20.369 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:12:20.369 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57782
00:12:20.369 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:12:20.369 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57782
00:12:20.369 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:12:20.369 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57782
00:12:20.369 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:12:20.369 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57782
00:12:20.369 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:12:20.369 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:12:20.369 element at address: 0x200012c72280 with size: 0.500549 MiB
00:12:20.369 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:12:20.369 element at address: 0x20001967c440 with size: 0.250549 MiB
00:12:20.369 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:12:20.369 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:12:20.369 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57782
00:12:20.369 element at address: 0x20000085df80 with size: 0.125549 MiB
00:12:20.369 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57782
00:12:20.369 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:12:20.369 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:12:20.369 element at address: 0x200028064140 with size: 0.023804 MiB
00:12:20.369 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:12:20.369 element at address: 0x200000859d40 with size: 0.016174 MiB
00:12:20.369 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57782
00:12:20.369 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:12:20.369 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:12:20.369 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:12:20.369 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57782
00:12:20.369 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:12:20.369 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57782
00:12:20.369 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:12:20.369 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57782
00:12:20.369 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:12:20.369 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:12:20.369 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:12:20.369 09:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57782
00:12:20.369 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57782 ']'
00:12:20.369
09:05:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57782
00:12:20.369 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:12:20.369 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:20.369 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57782
00:12:20.629 killing process with pid 57782
09:05:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:12:20.629 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:12:20.629 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57782'
00:12:20.629 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57782
00:12:20.629 09:05:19 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57782
00:12:23.196
00:12:23.196 real 0m4.385s
00:12:23.196 user 0m4.299s
00:12:23.196 sys 0m0.622s
00:12:23.196 ************************************
00:12:23.196 END TEST dpdk_mem_utility
00:12:23.196 ************************************
00:12:23.196 09:05:21 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:23.196 09:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:12:23.196 09:05:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:12:23.196 09:05:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:12:23.196 09:05:22 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:23.196 09:05:22 -- common/autotest_common.sh@10 -- # set +x
00:12:23.196 ************************************
00:12:23.196 START TEST event
00:12:23.196 ************************************
00:12:23.196 09:05:22 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:12:23.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:12:23.196 09:05:22 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:12:23.196 09:05:22 event -- common/autotest_common.sh@1691 -- # lcov --version
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:12:23.455 09:05:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:23.455 09:05:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:23.455 09:05:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:23.455 09:05:22 event -- scripts/common.sh@336 -- # IFS=.-:
00:12:23.455 09:05:22 event -- scripts/common.sh@336 -- # read -ra ver1
00:12:23.455 09:05:22 event -- scripts/common.sh@337 -- # IFS=.-:
00:12:23.455 09:05:22 event -- scripts/common.sh@337 -- # read -ra ver2
00:12:23.455 09:05:22 event -- scripts/common.sh@338 -- # local 'op=<'
00:12:23.455 09:05:22 event -- scripts/common.sh@340 -- # ver1_l=2
00:12:23.455 09:05:22 event -- scripts/common.sh@341 -- # ver2_l=1
00:12:23.455 09:05:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:23.455 09:05:22 event -- scripts/common.sh@344 -- # case "$op" in
00:12:23.455 09:05:22 event -- scripts/common.sh@345 -- # : 1
00:12:23.455 09:05:22 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:23.455 09:05:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:23.455 09:05:22 event -- scripts/common.sh@365 -- # decimal 1
00:12:23.455 09:05:22 event -- scripts/common.sh@353 -- # local d=1
00:12:23.455 09:05:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:23.455 09:05:22 event -- scripts/common.sh@355 -- # echo 1
00:12:23.455 09:05:22 event -- scripts/common.sh@365 -- # ver1[v]=1
00:12:23.455 09:05:22 event -- scripts/common.sh@366 -- # decimal 2
00:12:23.455 09:05:22 event -- scripts/common.sh@353 -- # local d=2
00:12:23.455 09:05:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:23.455 09:05:22 event -- scripts/common.sh@355 -- # echo 2
00:12:23.455 09:05:22 event -- scripts/common.sh@366 -- # ver2[v]=2
00:12:23.455 09:05:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:23.455 09:05:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:23.455 09:05:22 event -- scripts/common.sh@368 -- # return 0
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:12:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:23.455 --rc genhtml_branch_coverage=1
00:12:23.455 --rc genhtml_function_coverage=1
00:12:23.455 --rc genhtml_legend=1
00:12:23.455 --rc geninfo_all_blocks=1
00:12:23.455 --rc geninfo_unexecuted_blocks=1
00:12:23.455
00:12:23.455 '
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:12:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:23.455 --rc genhtml_branch_coverage=1
00:12:23.455 --rc genhtml_function_coverage=1
00:12:23.455 --rc genhtml_legend=1
00:12:23.455 --rc geninfo_all_blocks=1
00:12:23.455 --rc geninfo_unexecuted_blocks=1
00:12:23.455
00:12:23.455 '
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:12:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:23.455 --rc genhtml_branch_coverage=1
00:12:23.455 --rc genhtml_function_coverage=1
00:12:23.455 --rc genhtml_legend=1
00:12:23.455 --rc geninfo_all_blocks=1
00:12:23.455 --rc geninfo_unexecuted_blocks=1
00:12:23.455
00:12:23.455 '
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:12:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:23.455 --rc genhtml_branch_coverage=1
00:12:23.455 --rc genhtml_function_coverage=1
00:12:23.455 --rc genhtml_legend=1
00:12:23.455 --rc geninfo_all_blocks=1
00:12:23.455 --rc geninfo_unexecuted_blocks=1
00:12:23.455
00:12:23.455 '
00:12:23.455 09:05:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:12:23.455 09:05:22 event -- bdev/nbd_common.sh@6 -- # set -e
00:12:23.455 09:05:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:12:23.455 09:05:22 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:23.455 09:05:22 event -- common/autotest_common.sh@10 -- # set +x
00:12:23.455 ************************************
00:12:23.455 START TEST event_perf
00:12:23.455 ************************************
00:12:23.455 09:05:22 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:12:23.455 Running I/O for 1 seconds...[2024-11-06 09:05:22.355980] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization...
00:12:23.455 [2024-11-06 09:05:22.356109] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:12:23.712 [2024-11-06 09:05:22.543487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.712 [2024-11-06 09:05:22.695317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.712 [2024-11-06 09:05:22.695475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.712 [2024-11-06 09:05:22.695830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.712 [2024-11-06 09:05:22.696174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.087 Running I/O for 1 seconds... 00:12:25.087 lcore 0: 193007 00:12:25.087 lcore 1: 193006 00:12:25.087 lcore 2: 193008 00:12:25.087 lcore 3: 193008 00:12:25.087 done. 
00:12:25.087 00:12:25.087 real 0m1.639s 00:12:25.087 user 0m4.365s 00:12:25.087 sys 0m0.142s 00:12:25.087 09:05:23 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.087 ************************************ 00:12:25.087 END TEST event_perf 00:12:25.087 ************************************ 00:12:25.087 09:05:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:25.087 09:05:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:25.087 09:05:23 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:25.087 09:05:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.087 09:05:23 event -- common/autotest_common.sh@10 -- # set +x 00:12:25.087 ************************************ 00:12:25.087 START TEST event_reactor 00:12:25.087 ************************************ 00:12:25.087 09:05:24 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:25.087 [2024-11-06 09:05:24.065630] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:12:25.087 [2024-11-06 09:05:24.065761] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:12:25.346 [2024-11-06 09:05:24.249147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.346 [2024-11-06 09:05:24.376379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.722 test_start 00:12:26.722 oneshot 00:12:26.722 tick 100 00:12:26.722 tick 100 00:12:26.722 tick 250 00:12:26.723 tick 100 00:12:26.723 tick 100 00:12:26.723 tick 100 00:12:26.723 tick 250 00:12:26.723 tick 500 00:12:26.723 tick 100 00:12:26.723 tick 100 00:12:26.723 tick 250 00:12:26.723 tick 100 00:12:26.723 tick 100 00:12:26.723 test_end 00:12:26.723 ************************************ 00:12:26.723 END TEST event_reactor 00:12:26.723 ************************************ 00:12:26.723 00:12:26.723 real 0m1.593s 00:12:26.723 user 0m1.374s 00:12:26.723 sys 0m0.109s 00:12:26.723 09:05:25 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.723 09:05:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:26.723 09:05:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:26.723 09:05:25 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:26.723 09:05:25 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.723 09:05:25 event -- common/autotest_common.sh@10 -- # set +x 00:12:26.723 ************************************ 00:12:26.723 START TEST event_reactor_perf 00:12:26.723 ************************************ 00:12:26.723 09:05:25 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:26.723 [2024-11-06 
09:05:25.724384] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:26.723 [2024-11-06 09:05:25.724691] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:12:26.980 [2024-11-06 09:05:25.906713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.239 [2024-11-06 09:05:26.028265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.615 test_start 00:12:28.615 test_end 00:12:28.615 Performance: 357605 events per second 00:12:28.615 00:12:28.615 real 0m1.590s 00:12:28.615 user 0m1.375s 00:12:28.615 sys 0m0.104s 00:12:28.615 09:05:27 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.615 ************************************ 00:12:28.615 END TEST event_reactor_perf 00:12:28.615 ************************************ 00:12:28.615 09:05:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:28.615 09:05:27 event -- event/event.sh@49 -- # uname -s 00:12:28.615 09:05:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:28.615 09:05:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:28.615 09:05:27 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:28.615 09:05:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:28.615 09:05:27 event -- common/autotest_common.sh@10 -- # set +x 00:12:28.615 ************************************ 00:12:28.615 START TEST event_scheduler 00:12:28.615 ************************************ 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:28.615 * Looking for test storage... 
00:12:28.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.615 09:05:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.615 --rc genhtml_branch_coverage=1 00:12:28.615 --rc genhtml_function_coverage=1 00:12:28.615 --rc genhtml_legend=1 00:12:28.615 --rc geninfo_all_blocks=1 00:12:28.615 --rc geninfo_unexecuted_blocks=1 00:12:28.615 00:12:28.615 ' 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.615 --rc genhtml_branch_coverage=1 00:12:28.615 --rc genhtml_function_coverage=1 00:12:28.615 --rc 
genhtml_legend=1 00:12:28.615 --rc geninfo_all_blocks=1 00:12:28.615 --rc geninfo_unexecuted_blocks=1 00:12:28.615 00:12:28.615 ' 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.615 --rc genhtml_branch_coverage=1 00:12:28.615 --rc genhtml_function_coverage=1 00:12:28.615 --rc genhtml_legend=1 00:12:28.615 --rc geninfo_all_blocks=1 00:12:28.615 --rc geninfo_unexecuted_blocks=1 00:12:28.615 00:12:28.615 ' 00:12:28.615 09:05:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.615 --rc genhtml_branch_coverage=1 00:12:28.615 --rc genhtml_function_coverage=1 00:12:28.615 --rc genhtml_legend=1 00:12:28.616 --rc geninfo_all_blocks=1 00:12:28.616 --rc geninfo_unexecuted_blocks=1 00:12:28.616 00:12:28.616 ' 00:12:28.616 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:28.616 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58047 00:12:28.616 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:28.616 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58047 00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58047 ']' 00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:28.616 09:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:28.616 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:28.874 [2024-11-06 09:05:27.700767] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:12:28.874 [2024-11-06 09:05:27.701412] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58047 ] 00:12:28.874 [2024-11-06 09:05:27.892607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.132 [2024-11-06 09:05:28.017823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.132 [2024-11-06 09:05:28.018001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.132 [2024-11-06 09:05:28.018168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.132 [2024-11-06 09:05:28.018199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:12:29.698 09:05:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:29.698 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:29.698 POWER: Cannot set governor of lcore 0 to userspace 00:12:29.698 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:29.698 POWER: Cannot set governor of lcore 0 to performance 00:12:29.698 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:29.698 POWER: Cannot set governor of lcore 0 to userspace 00:12:29.698 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:29.698 POWER: Cannot set governor of lcore 0 to userspace 00:12:29.698 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:29.698 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:29.698 POWER: Unable to set Power Management Environment for lcore 0 00:12:29.698 [2024-11-06 09:05:28.580745] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:12:29.698 [2024-11-06 09:05:28.580771] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:12:29.698 [2024-11-06 09:05:28.580785] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:12:29.698 [2024-11-06 09:05:28.580809] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:29.698 [2024-11-06 09:05:28.580820] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:29.698 [2024-11-06 09:05:28.580832] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.698 09:05:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.698 09:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 [2024-11-06 09:05:28.915486] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:12:29.956 09:05:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.956 09:05:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:29.956 09:05:28 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:29.956 09:05:28 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.956 09:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 ************************************ 00:12:29.956 START TEST scheduler_create_thread 00:12:29.956 ************************************ 00:12:29.956 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:12:29.956 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:29.956 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.956 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 2 00:12:29.956 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.957 3 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.957 4 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.957 5 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.957 6 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.957 7 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.957 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.215 8 00:12:30.215 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.215 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:30.215 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.215 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.215 9 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.215 10 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.215 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:31.597 09:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.598 09:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:31.598 09:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:31.598 09:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.598 09:05:30 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:32.541 ************************************ 00:12:32.541 END TEST scheduler_create_thread 00:12:32.541 ************************************ 00:12:32.541 09:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.541 00:12:32.541 real 0m2.617s 00:12:32.541 user 0m0.023s 00:12:32.541 sys 0m0.008s 00:12:32.541 09:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:32.541 09:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:32.799 09:05:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:32.799 09:05:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58047 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58047 ']' 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58047 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58047 00:12:32.799 killing process with pid 58047 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:12:32.799 09:05:31 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:12:32.800 09:05:31 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58047' 00:12:32.800 09:05:31 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58047 00:12:32.800 09:05:31 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58047 00:12:33.058 [2024-11-06 09:05:32.025653] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:34.435 00:12:34.435 real 0m5.860s 00:12:34.435 user 0m9.916s 00:12:34.435 sys 0m0.585s 00:12:34.435 09:05:33 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.435 ************************************ 00:12:34.435 09:05:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:34.435 END TEST event_scheduler 00:12:34.435 ************************************ 00:12:34.435 09:05:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:34.435 09:05:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:34.435 09:05:33 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:34.435 09:05:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:34.435 09:05:33 event -- common/autotest_common.sh@10 -- # set +x 00:12:34.435 ************************************ 00:12:34.435 START TEST app_repeat 00:12:34.435 ************************************ 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58159 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:34.435 
09:05:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:34.435 Process app_repeat pid: 58159 00:12:34.435 spdk_app_start Round 0 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58159' 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:34.435 09:05:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:12:34.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58159 ']' 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.435 09:05:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:34.435 [2024-11-06 09:05:33.345054] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:12:34.435 [2024-11-06 09:05:33.345183] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:12:34.694 [2024-11-06 09:05:33.527896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:34.694 [2024-11-06 09:05:33.650044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.694 [2024-11-06 09:05:33.650075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.261 09:05:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.261 09:05:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:35.261 09:05:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:35.829 Malloc0 00:12:35.829 09:05:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:36.086 Malloc1 00:12:36.086 09:05:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:36.086 09:05:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.086 09:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:36.086 09:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:36.087 09:05:34 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.087 09:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:36.344 /dev/nbd0 00:12:36.344 09:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.344 09:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:36.344 1+0 records in 00:12:36.344 1+0 
records out 00:12:36.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346729 s, 11.8 MB/s 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:36.344 09:05:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:36.344 09:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.344 09:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.344 09:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:36.602 /dev/nbd1 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:36.602 1+0 records in 00:12:36.602 1+0 records out 00:12:36.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332589 s, 12.3 MB/s 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:36.602 09:05:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.602 09:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:36.860 { 00:12:36.860 "nbd_device": "/dev/nbd0", 00:12:36.860 "bdev_name": "Malloc0" 00:12:36.860 }, 00:12:36.860 { 00:12:36.860 "nbd_device": "/dev/nbd1", 00:12:36.860 "bdev_name": "Malloc1" 00:12:36.860 } 00:12:36.860 ]' 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:36.860 { 00:12:36.860 "nbd_device": "/dev/nbd0", 00:12:36.860 "bdev_name": "Malloc0" 00:12:36.860 }, 00:12:36.860 { 00:12:36.860 "nbd_device": "/dev/nbd1", 00:12:36.860 "bdev_name": "Malloc1" 00:12:36.860 } 00:12:36.860 ]' 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:36.860 /dev/nbd1' 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:36.860 /dev/nbd1' 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.860 09:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:36.861 256+0 records in 00:12:36.861 256+0 records out 00:12:36.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577296 s, 182 MB/s 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.861 09:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:37.118 256+0 records in 00:12:37.119 256+0 records out 00:12:37.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252715 s, 41.5 MB/s 00:12:37.119 09:05:35 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:37.119 256+0 records in 00:12:37.119 256+0 records out 00:12:37.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290359 s, 36.1 MB/s 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.119 09:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.378 09:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.637 09:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:37.895 09:05:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:37.895 09:05:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:38.155 09:05:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:39.530 [2024-11-06 09:05:38.340452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:39.530 [2024-11-06 09:05:38.461120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.530 [2024-11-06 09:05:38.461120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.788 
[2024-11-06 09:05:38.663293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:39.788 [2024-11-06 09:05:38.663378] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:41.172 spdk_app_start Round 1 00:12:41.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:41.172 09:05:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:41.172 09:05:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:41.172 09:05:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58159 ']' 00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:41.172 09:05:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.430 09:05:40 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.430 09:05:40 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:41.430 09:05:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.688 Malloc0 00:12:41.688 09:05:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.952 Malloc1 00:12:41.952 09:05:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.952 09:05:40 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.952 09:05:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:42.241 /dev/nbd0 00:12:42.241 09:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.241 09:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:42.241 1+0 records in 00:12:42.241 1+0 records out 00:12:42.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490293 s, 8.4 MB/s 00:12:42.241 09:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.501 09:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:42.501 09:05:41 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.501 09:05:41 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:42.501 09:05:41 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:42.501 09:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.501 09:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.501 09:05:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:42.501 /dev/nbd1 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:42.759 1+0 records in 00:12:42.759 1+0 records out 00:12:42.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459561 s, 8.9 MB/s 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:42.759 09:05:41 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:42.759 09:05:41 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.759 09:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.017 09:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:43.017 { 00:12:43.017 "nbd_device": "/dev/nbd0", 00:12:43.017 "bdev_name": "Malloc0" 00:12:43.017 }, 00:12:43.017 { 00:12:43.017 "nbd_device": "/dev/nbd1", 00:12:43.017 "bdev_name": "Malloc1" 00:12:43.017 } 00:12:43.017 ]' 00:12:43.017 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:43.017 { 00:12:43.017 "nbd_device": "/dev/nbd0", 00:12:43.017 "bdev_name": "Malloc0" 00:12:43.017 }, 00:12:43.017 { 00:12:43.017 "nbd_device": "/dev/nbd1", 00:12:43.018 "bdev_name": "Malloc1" 00:12:43.018 } 00:12:43.018 ]' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:43.018 /dev/nbd1' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:43.018 /dev/nbd1' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:43.018 
09:05:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:43.018 256+0 records in 00:12:43.018 256+0 records out 00:12:43.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135366 s, 77.5 MB/s 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:43.018 256+0 records in 00:12:43.018 256+0 records out 00:12:43.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029249 s, 35.8 MB/s 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:43.018 256+0 records in 00:12:43.018 256+0 records out 00:12:43.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328954 s, 31.9 MB/s 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.018 09:05:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.018 09:05:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:43.276 09:05:42 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.276 09:05:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.535 09:05:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.793 09:05:42 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:43.793 09:05:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:43.793 09:05:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:44.360 09:05:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:45.293 [2024-11-06 09:05:44.328177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.549 [2024-11-06 09:05:44.451878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.549 [2024-11-06 09:05:44.451879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.808 [2024-11-06 09:05:44.661192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:45.808 [2024-11-06 09:05:44.661285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:47.183 spdk_app_start Round 2 00:12:47.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:47.183 09:05:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:47.183 09:05:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:47.183 09:05:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58159 ']' 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:47.183 09:05:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:47.442 09:05:46 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.442 09:05:46 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:47.442 09:05:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:47.701 Malloc0 00:12:47.701 09:05:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:47.958 Malloc1 00:12:47.958 09:05:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:47.958 09:05:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.958 09:05:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:47.958 09:05:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:47.959 09:05:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:48.217 /dev/nbd0 00:12:48.217 09:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.217 09:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:48.217 1+0 records in 00:12:48.217 1+0 records out 00:12:48.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360827 s, 11.4 MB/s 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:48.217 09:05:47 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:48.217 09:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.217 09:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.217 09:05:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:48.475 /dev/nbd1 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:48.475 09:05:47 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:48.475 1+0 records in 00:12:48.475 1+0 records out 00:12:48.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357553 s, 11.5 MB/s 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:48.475 09:05:47 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.475 09:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:48.733 09:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:48.733 { 00:12:48.733 "nbd_device": "/dev/nbd0", 00:12:48.733 "bdev_name": "Malloc0" 00:12:48.733 }, 00:12:48.733 { 00:12:48.733 "nbd_device": "/dev/nbd1", 00:12:48.733 "bdev_name": "Malloc1" 00:12:48.733 } 00:12:48.733 ]' 00:12:48.733 09:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:48.733 09:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:48.733 { 00:12:48.733 "nbd_device": "/dev/nbd0", 00:12:48.733 "bdev_name": "Malloc0" 00:12:48.733 }, 00:12:48.733 { 00:12:48.733 "nbd_device": "/dev/nbd1", 00:12:48.733 "bdev_name": "Malloc1" 00:12:48.733 } 00:12:48.733 ]' 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:48.992 /dev/nbd1' 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:48.992 /dev/nbd1' 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:48.992 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:48.993 256+0 records in 00:12:48.993 256+0 records out 00:12:48.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115729 s, 90.6 MB/s 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.993 09:05:47 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:48.993 256+0 records in 00:12:48.993 256+0 records out 00:12:48.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282583 s, 37.1 MB/s 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:48.993 256+0 records in 00:12:48.993 256+0 records out 00:12:48.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277889 s, 37.7 MB/s 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.993 09:05:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.250 09:05:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.509 09:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:49.767 09:05:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:49.767 09:05:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:50.333 09:05:49 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:12:51.265 [2024-11-06 09:05:50.278205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.521 [2024-11-06 09:05:50.396695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.521 [2024-11-06 09:05:50.396695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.779 [2024-11-06 09:05:50.595512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:51.779 [2024-11-06 09:05:50.595583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:53.188 09:05:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:12:53.188 09:05:52 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58159 ']' 00:12:53.188 09:05:52 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:53.188 09:05:52 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:53.189 09:05:52 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:53.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:53.189 09:05:52 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:53.189 09:05:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:53.447 09:05:52 event.app_repeat -- event/event.sh@39 -- # killprocess 58159 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58159 ']' 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58159 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58159 00:12:53.447 killing process with pid 58159 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58159' 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58159 00:12:53.447 09:05:52 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58159 00:12:54.824 spdk_app_start is called in Round 0. 00:12:54.824 Shutdown signal received, stop current app iteration 00:12:54.824 Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 reinitialization... 00:12:54.824 spdk_app_start is called in Round 1. 00:12:54.824 Shutdown signal received, stop current app iteration 00:12:54.824 Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 reinitialization... 00:12:54.824 spdk_app_start is called in Round 2. 
00:12:54.824 Shutdown signal received, stop current app iteration 00:12:54.824 Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 reinitialization... 00:12:54.824 spdk_app_start is called in Round 3. 00:12:54.824 Shutdown signal received, stop current app iteration 00:12:54.824 09:05:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:54.824 09:05:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:54.824 00:12:54.824 real 0m20.185s 00:12:54.824 user 0m43.367s 00:12:54.824 sys 0m3.245s 00:12:54.824 09:05:53 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.824 09:05:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 ************************************ 00:12:54.824 END TEST app_repeat 00:12:54.824 ************************************ 00:12:54.824 09:05:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:54.824 09:05:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:54.824 09:05:53 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:54.824 09:05:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.824 09:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 ************************************ 00:12:54.824 START TEST cpu_locks 00:12:54.824 ************************************ 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:54.824 * Looking for test storage... 
00:12:54.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.824 09:05:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.824 --rc genhtml_branch_coverage=1 00:12:54.824 --rc genhtml_function_coverage=1 00:12:54.824 --rc genhtml_legend=1 00:12:54.824 --rc geninfo_all_blocks=1 00:12:54.824 --rc geninfo_unexecuted_blocks=1 00:12:54.824 00:12:54.824 ' 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.824 --rc genhtml_branch_coverage=1 00:12:54.824 --rc genhtml_function_coverage=1 00:12:54.824 --rc genhtml_legend=1 00:12:54.824 --rc geninfo_all_blocks=1 00:12:54.824 --rc geninfo_unexecuted_blocks=1 
00:12:54.824 00:12:54.824 ' 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.824 --rc genhtml_branch_coverage=1 00:12:54.824 --rc genhtml_function_coverage=1 00:12:54.824 --rc genhtml_legend=1 00:12:54.824 --rc geninfo_all_blocks=1 00:12:54.824 --rc geninfo_unexecuted_blocks=1 00:12:54.824 00:12:54.824 ' 00:12:54.824 09:05:53 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.824 --rc genhtml_branch_coverage=1 00:12:54.824 --rc genhtml_function_coverage=1 00:12:54.824 --rc genhtml_legend=1 00:12:54.824 --rc geninfo_all_blocks=1 00:12:54.824 --rc geninfo_unexecuted_blocks=1 00:12:54.824 00:12:54.824 ' 00:12:54.824 09:05:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:54.825 09:05:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:54.825 09:05:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:54.825 09:05:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:54.825 09:05:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:54.825 09:05:53 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.825 09:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:54.825 ************************************ 00:12:54.825 START TEST default_locks 00:12:54.825 ************************************ 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58617 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:54.825 
09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58617 00:12:54.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58617 ']' 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.825 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:55.082 [2024-11-06 09:05:53.891597] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:12:55.082 [2024-11-06 09:05:53.891724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58617 ] 00:12:55.082 [2024-11-06 09:05:54.060275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.340 [2024-11-06 09:05:54.180923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.275 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:56.275 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:12:56.275 09:05:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58617 00:12:56.275 09:05:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58617 00:12:56.275 09:05:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58617 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58617 ']' 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58617 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58617 00:12:56.534 killing process with pid 58617 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58617' 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58617 00:12:56.534 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58617 00:12:59.065 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58617 00:12:59.065 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:12:59.065 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58617 00:12:59.065 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:59.065 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58617 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58617 ']' 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.066 ERROR: process (pid: 58617) is no longer running 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:59.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58617) - No such process 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:59.066 ************************************ 00:12:59.066 END TEST default_locks 00:12:59.066 ************************************ 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:59.066 00:12:59.066 real 0m4.157s 00:12:59.066 user 0m4.110s 00:12:59.066 sys 0m0.680s 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:59.066 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:59.066 09:05:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:59.066 09:05:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:59.066 09:05:57 event.cpu_locks -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:12:59.066 09:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:59.066 ************************************ 00:12:59.066 START TEST default_locks_via_rpc 00:12:59.066 ************************************ 00:12:59.066 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58692 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58692 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58692 ']' 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:59.066 09:05:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.324 [2024-11-06 09:05:58.111258] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:12:59.324 [2024-11-06 09:05:58.111399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58692 ] 00:12:59.324 [2024-11-06 09:05:58.282501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.583 [2024-11-06 09:05:58.406028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.520 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.520 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:00.520 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:00.520 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.521 09:05:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58692 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58692 00:13:00.521 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58692 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58692 ']' 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58692 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:00.785 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58692 00:13:01.046 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:01.046 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:01.046 killing process with pid 58692 00:13:01.046 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58692' 00:13:01.046 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58692 00:13:01.046 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58692 00:13:03.578 00:13:03.578 real 0m4.238s 00:13:03.578 user 0m4.180s 00:13:03.578 sys 0m0.728s 00:13:03.578 09:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.578 
************************************ 00:13:03.578 END TEST default_locks_via_rpc 00:13:03.578 ************************************ 00:13:03.578 09:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 09:06:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:03.578 09:06:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:03.578 09:06:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.578 09:06:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 ************************************ 00:13:03.578 START TEST non_locking_app_on_locked_coremask 00:13:03.578 ************************************ 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58766 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58766 /var/tmp/spdk.sock 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58766 ']' 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.578 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 [2024-11-06 09:06:02.418205] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:03.578 [2024-11-06 09:06:02.418555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:13:03.578 [2024-11-06 09:06:02.594527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.837 [2024-11-06 09:06:02.719113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58782 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58782 /var/tmp/spdk2.sock 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58782 ']' 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:04.775 09:06:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.775 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:04.775 [2024-11-06 09:06:03.726326] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:04.775 [2024-11-06 09:06:03.727244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58782 ] 00:13:05.035 [2024-11-06 09:06:03.914762] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:05.035 [2024-11-06 09:06:03.914833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.294 [2024-11-06 09:06:04.154578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.828 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:07.828 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:07.828 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58766 00:13:07.828 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58766 00:13:07.828 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58766 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58766 ']' 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58766 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58766 00:13:08.396 killing process with pid 58766 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58766' 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58766 00:13:08.396 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58766 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58782 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58782 ']' 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58782 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58782 00:13:13.678 killing process with pid 58782 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58782' 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58782 00:13:13.678 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58782 00:13:15.582 00:13:15.582 real 0m12.259s 00:13:15.582 user 0m12.579s 00:13:15.582 sys 0m1.484s 00:13:15.582 09:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:15.582 09:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:15.582 ************************************ 00:13:15.582 END TEST non_locking_app_on_locked_coremask 00:13:15.582 ************************************ 00:13:15.841 09:06:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:15.841 09:06:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:15.841 09:06:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.841 09:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:15.841 ************************************ 00:13:15.841 START TEST locking_app_on_unlocked_coremask 00:13:15.841 ************************************ 00:13:15.841 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:13:15.841 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:15.841 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58939 00:13:15.841 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58939 /var/tmp/spdk.sock 00:13:15.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.841 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58939 ']' 00:13:15.842 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.842 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.842 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.842 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.842 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:15.842 [2024-11-06 09:06:14.794934] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:15.842 [2024-11-06 09:06:14.795110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:13:16.100 [2024-11-06 09:06:14.982342] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:16.100 [2024-11-06 09:06:14.982398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.100 [2024-11-06 09:06:15.111434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58960 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58960 /var/tmp/spdk2.sock 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58960 ']' 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:17.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:17.474 09:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:17.474 [2024-11-06 09:06:16.187384] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:17.474 [2024-11-06 09:06:16.187725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58960 ] 00:13:17.474 [2024-11-06 09:06:16.374953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.733 [2024-11-06 09:06:16.640757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.265 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.265 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:20.265 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58960 00:13:20.265 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58960 00:13:20.265 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58939 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58939 ']' 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58939 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58939 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58939' 00:13:20.830 killing process with pid 58939 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58939 00:13:20.830 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58939 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58960 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58960 ']' 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58960 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58960 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58960' 00:13:26.099 killing process with pid 58960 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58960 00:13:26.099 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58960 00:13:28.632 00:13:28.632 real 0m12.417s 00:13:28.632 user 0m12.820s 00:13:28.632 sys 0m1.447s 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:28.632 ************************************ 00:13:28.632 END TEST locking_app_on_unlocked_coremask 00:13:28.632 ************************************ 00:13:28.632 09:06:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:28.632 09:06:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:28.632 09:06:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:28.632 09:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:28.632 ************************************ 00:13:28.632 START TEST locking_app_on_locked_coremask 00:13:28.632 ************************************ 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59116 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59116 /var/tmp/spdk.sock 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59116 ']' 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.632 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:28.632 [2024-11-06 09:06:27.252212] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:28.632 [2024-11-06 09:06:27.252412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:13:28.632 [2024-11-06 09:06:27.433600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.632 [2024-11-06 09:06:27.557754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59132 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59132 /var/tmp/spdk2.sock 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59132 /var/tmp/spdk2.sock 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59132 /var/tmp/spdk2.sock 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59132 ']' 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:29.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.565 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:29.825 [2024-11-06 09:06:28.611991] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:29.825 [2024-11-06 09:06:28.612374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:13:29.825 [2024-11-06 09:06:28.802131] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59116 has claimed it. 00:13:29.825 [2024-11-06 09:06:28.802218] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:30.392 ERROR: process (pid: 59132) is no longer running 00:13:30.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59132) - No such process 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59116 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59116 00:13:30.392 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59116 00:13:30.971 09:06:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59116 ']' 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59116 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59116 00:13:30.971 killing process with pid 59116 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59116' 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59116 00:13:30.971 09:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59116 00:13:33.523 ************************************ 00:13:33.523 END TEST locking_app_on_locked_coremask 00:13:33.523 ************************************ 00:13:33.523 00:13:33.523 real 0m5.078s 00:13:33.523 user 0m5.256s 00:13:33.523 sys 0m0.936s 00:13:33.523 09:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.523 09:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:33.523 09:06:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:33.523 09:06:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 
00:13:33.523 09:06:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.523 09:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:33.524 ************************************ 00:13:33.524 START TEST locking_overlapped_coremask 00:13:33.524 ************************************ 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59207 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59207 /var/tmp/spdk.sock 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59207 ']' 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.524 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:33.524 [2024-11-06 09:06:32.399566] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:33.524 [2024-11-06 09:06:32.399698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:13:33.780 [2024-11-06 09:06:32.583449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.780 [2024-11-06 09:06:32.702395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.780 [2024-11-06 09:06:32.702534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.780 [2024-11-06 09:06:32.702567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59225 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59225 /var/tmp/spdk2.sock 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59225 /var/tmp/spdk2.sock 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59225 /var/tmp/spdk2.sock 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59225 ']' 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.712 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:34.712 [2024-11-06 09:06:33.734860] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:34.712 [2024-11-06 09:06:33.735933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:13:34.969 [2024-11-06 09:06:33.943945] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59207 has claimed it. 00:13:34.969 [2024-11-06 09:06:33.944015] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
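The "Cannot create lock on core 2" failure above is the expected outcome of this test: the second target asks for a core mask (0x1c) that overlaps a core already claimed by pid 59207 through SPDK's per-core lock files (`/var/tmp/spdk_cpu_lock_NNN`, which `check_remaining_locks` globs further down in this log). A minimal shell sketch of the one-lock-file-per-core idea follows; SPDK takes these locks in C inside `spdk_app_start()`, so the `flock`-based helper, the `/tmp/demo_` path, and the `claim_core` name here are illustrative stand-ins, not SPDK's implementation.

```shell
#!/usr/bin/env bash
# Demo of per-core advisory locking: one lock file per CPU core.
# A second process (or a second claim on the same core) fails fast.
claim_core() {
  local core=$1
  local lockfile
  lockfile=$(printf '/tmp/demo_spdk_cpu_lock_%03d' "$core")
  # Open the lock file on a dynamically assigned fd and try a
  # non-blocking exclusive lock; failure means the core is taken.
  exec {fd}>"$lockfile"
  if ! flock -n "$fd"; then
    echo "Cannot create lock on core $core, another process has claimed it." >&2
    return 1
  fi
  echo "claimed core $core via $lockfile"
}

claim_core 2
```

The lock is released automatically when the holding process exits and its file descriptor closes, which is why a crashed target does not leave cores permanently claimed.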
00:13:35.535 ERROR: process (pid: 59225) is no longer running 00:13:35.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59225) - No such process 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59207 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59207 ']' 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59207 00:13:35.535 09:06:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59207 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59207' 00:13:35.535 killing process with pid 59207 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59207 00:13:35.535 09:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59207 00:13:38.067 00:13:38.067 real 0m4.571s 00:13:38.067 user 0m12.530s 00:13:38.068 sys 0m0.671s 00:13:38.068 ************************************ 00:13:38.068 END TEST locking_overlapped_coremask 00:13:38.068 ************************************ 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:38.068 09:06:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:38.068 09:06:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:38.068 09:06:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.068 09:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:38.068 ************************************ 00:13:38.068 START TEST 
locking_overlapped_coremask_via_rpc 00:13:38.068 ************************************ 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59295 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59295 /var/tmp/spdk.sock 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59295 ']' 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.068 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.068 [2024-11-06 09:06:37.045718] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:38.068 [2024-11-06 09:06:37.045850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59295 ] 00:13:38.325 [2024-11-06 09:06:37.226866] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:38.325 [2024-11-06 09:06:37.226924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.584 [2024-11-06 09:06:37.370997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.584 [2024-11-06 09:06:37.371103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.584 [2024-11-06 09:06:37.371115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59318 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59318 ']' 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.521 09:06:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:39.521 [2024-11-06 09:06:38.400993] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:39.521 [2024-11-06 09:06:38.401136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:13:39.779 [2024-11-06 09:06:38.590644] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:39.779 [2024-11-06 09:06:38.590711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.038 [2024-11-06 09:06:38.845846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.038 [2024-11-06 09:06:38.845976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.038 [2024-11-06 09:06:38.846007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.572 09:06:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.572 [2024-11-06 09:06:41.046517] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59295 has claimed it. 00:13:42.572 request: 00:13:42.572 { 00:13:42.572 "method": "framework_enable_cpumask_locks", 00:13:42.572 "req_id": 1 00:13:42.572 } 00:13:42.572 Got JSON-RPC error response 00:13:42.572 response: 00:13:42.572 { 00:13:42.572 "code": -32603, 00:13:42.572 "message": "Failed to claim CPU core: 2" 00:13:42.572 } 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59295 /var/tmp/spdk.sock 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59295 ']' 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:13:42.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59318 ']' 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
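The request/response pair logged above (`framework_enable_cpumask_locks` answered with error `-32603`, "Failed to claim CPU core: 2") is ordinary JSON-RPC over the target's UNIX socket. A small sketch of building that request is below; the standard JSON-RPC 2.0 framing with an `"id"` field is an assumption (the log's dump prints `method`/`req_id`), and actually sending it, e.g. over `/var/tmp/spdk2.sock`, is deliberately omitted.

```shell
#!/usr/bin/env bash
# Build the JSON-RPC request corresponding to the call in the log.
# Sending it to the target (for example with SPDK's scripts/rpc.py,
# or any client speaking JSON-RPC over a UNIX socket) is out of scope.
build_rpc_request() {
  local method=$1 id=$2
  printf '{"jsonrpc": "2.0", "method": "%s", "id": %d}\n' "$method" "$id"
}

build_rpc_request framework_enable_cpumask_locks 1
```

In this test the call is expected to fail, because the first target (pid 59295) already holds the lock file for core 2 when the second target tries to re-enable cpumask locks.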
00:13:42.572 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:42.573 ************************************ 00:13:42.573 00:13:42.573 real 0m4.588s 00:13:42.573 user 0m1.354s 00:13:42.573 sys 0m0.240s 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.573 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.573 END TEST locking_overlapped_coremask_via_rpc 00:13:42.573 ************************************ 00:13:42.573 09:06:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:42.573 09:06:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59295 ]] 00:13:42.573 09:06:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59295 00:13:42.573 09:06:41 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59295 ']' 00:13:42.573 09:06:41 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59295 00:13:42.573 09:06:41 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:13:42.573 09:06:41 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.573 09:06:41 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59295 00:13:42.834 killing process with pid 59295 00:13:42.834 09:06:41 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.834 09:06:41 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.834 09:06:41 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59295' 00:13:42.834 09:06:41 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59295 00:13:42.834 09:06:41 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59295 00:13:45.368 09:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59318 ]] 00:13:45.368 09:06:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59318 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59318 ']' 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59318 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59318 00:13:45.368 killing process with pid 59318 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59318' 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59318 00:13:45.368 09:06:44 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59318 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59295 ]] 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59295 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59295 ']' 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59295 00:13:47.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59295) - No such process 00:13:47.901 Process with pid 59295 is not found 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59295 is not found' 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59318 ]] 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59318 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59318 ']' 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59318 00:13:47.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59318) - No such process 00:13:47.901 Process with pid 59318 is not found 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59318 is not found' 00:13:47.901 09:06:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:47.901 00:13:47.901 real 0m52.994s 00:13:47.901 user 1m29.577s 00:13:47.901 sys 0m7.521s 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.901 09:06:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:47.901 
************************************ 00:13:47.901 END TEST cpu_locks 00:13:47.901 ************************************ 00:13:47.901 00:13:47.902 real 1m24.536s 00:13:47.902 user 2m30.250s 00:13:47.902 sys 0m12.108s 00:13:47.902 09:06:46 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.902 09:06:46 event -- common/autotest_common.sh@10 -- # set +x 00:13:47.902 ************************************ 00:13:47.902 END TEST event 00:13:47.902 ************************************ 00:13:47.902 09:06:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:47.902 09:06:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:47.902 09:06:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:47.902 09:06:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.902 ************************************ 00:13:47.902 START TEST thread 00:13:47.902 ************************************ 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:47.902 * Looking for test storage... 
00:13:47.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:47.902 09:06:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.902 09:06:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.902 09:06:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.902 09:06:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.902 09:06:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.902 09:06:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.902 09:06:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.902 09:06:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.902 09:06:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.902 09:06:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.902 09:06:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.902 09:06:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:47.902 09:06:46 thread -- scripts/common.sh@345 -- # : 1 00:13:47.902 09:06:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.902 09:06:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.902 09:06:46 thread -- scripts/common.sh@365 -- # decimal 1 00:13:47.902 09:06:46 thread -- scripts/common.sh@353 -- # local d=1 00:13:47.902 09:06:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.902 09:06:46 thread -- scripts/common.sh@355 -- # echo 1 00:13:47.902 09:06:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.902 09:06:46 thread -- scripts/common.sh@366 -- # decimal 2 00:13:47.902 09:06:46 thread -- scripts/common.sh@353 -- # local d=2 00:13:47.902 09:06:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.902 09:06:46 thread -- scripts/common.sh@355 -- # echo 2 00:13:47.902 09:06:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.902 09:06:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.902 09:06:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.902 09:06:46 thread -- scripts/common.sh@368 -- # return 0 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.902 --rc genhtml_branch_coverage=1 00:13:47.902 --rc genhtml_function_coverage=1 00:13:47.902 --rc genhtml_legend=1 00:13:47.902 --rc geninfo_all_blocks=1 00:13:47.902 --rc geninfo_unexecuted_blocks=1 00:13:47.902 00:13:47.902 ' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.902 --rc genhtml_branch_coverage=1 00:13:47.902 --rc genhtml_function_coverage=1 00:13:47.902 --rc genhtml_legend=1 00:13:47.902 --rc geninfo_all_blocks=1 00:13:47.902 --rc geninfo_unexecuted_blocks=1 00:13:47.902 00:13:47.902 ' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:47.902 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.902 --rc genhtml_branch_coverage=1 00:13:47.902 --rc genhtml_function_coverage=1 00:13:47.902 --rc genhtml_legend=1 00:13:47.902 --rc geninfo_all_blocks=1 00:13:47.902 --rc geninfo_unexecuted_blocks=1 00:13:47.902 00:13:47.902 ' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.902 --rc genhtml_branch_coverage=1 00:13:47.902 --rc genhtml_function_coverage=1 00:13:47.902 --rc genhtml_legend=1 00:13:47.902 --rc geninfo_all_blocks=1 00:13:47.902 --rc geninfo_unexecuted_blocks=1 00:13:47.902 00:13:47.902 ' 00:13:47.902 09:06:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:47.902 09:06:46 thread -- common/autotest_common.sh@10 -- # set +x 00:13:47.902 ************************************ 00:13:47.902 START TEST thread_poller_perf 00:13:47.902 ************************************ 00:13:47.902 09:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:47.902 [2024-11-06 09:06:46.936167] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:47.902 [2024-11-06 09:06:46.936304] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:13:48.160 [2024-11-06 09:06:47.118695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.419 [2024-11-06 09:06:47.241673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.419 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:49.796 [2024-11-06T09:06:48.836Z] ====================================== 00:13:49.796 [2024-11-06T09:06:48.836Z] busy:2500989586 (cyc) 00:13:49.796 [2024-11-06T09:06:48.836Z] total_run_count: 388000 00:13:49.796 [2024-11-06T09:06:48.836Z] tsc_hz: 2490000000 (cyc) 00:13:49.796 [2024-11-06T09:06:48.836Z] ====================================== 00:13:49.796 [2024-11-06T09:06:48.836Z] poller_cost: 6445 (cyc), 2588 (nsec) 00:13:49.796 00:13:49.796 real 0m1.616s 00:13:49.796 user 0m1.389s 00:13:49.796 sys 0m0.118s 00:13:49.796 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.796 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:49.796 ************************************ 00:13:49.796 END TEST thread_poller_perf 00:13:49.796 ************************************ 00:13:49.796 09:06:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:49.796 09:06:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:13:49.796 09:06:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:49.796 09:06:48 thread -- common/autotest_common.sh@10 -- # set +x 00:13:49.796 ************************************ 00:13:49.796 START TEST thread_poller_perf 00:13:49.796 
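The `poller_cost` figures reported above are consistent with busy cycles divided by `total_run_count`, then converted to nanoseconds via `tsc_hz`. Whether `poller_perf` rounds exactly this way internally is an assumption, but plain integer arithmetic reproduces the logged values (6445 cyc, 2588 nsec) from the logged inputs:

```shell
#!/usr/bin/env bash
# Rederive poller_cost from the numbers printed in the run above.
busy_cyc=2500989586        # busy: ... (cyc)
total_run_count=388000     # total_run_count
tsc_hz=2490000000          # tsc_hz: 2490000000 (cyc)

cost_cyc=$(( busy_cyc / total_run_count ))        # cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # cycles -> ns at 2.49 GHz

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

The same arithmetic applied to the second run below (busy 2493964176 cyc over 5030000 iterations) yields 495 cyc and 198 nsec, matching that run's report as well.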
************************************ 00:13:49.796 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:49.796 [2024-11-06 09:06:48.620722] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:49.796 [2024-11-06 09:06:48.620838] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:13:49.796 [2024-11-06 09:06:48.801982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.055 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:50.055 [2024-11-06 09:06:48.922591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.430 [2024-11-06T09:06:50.470Z] ====================================== 00:13:51.430 [2024-11-06T09:06:50.470Z] busy:2493964176 (cyc) 00:13:51.430 [2024-11-06T09:06:50.470Z] total_run_count: 5030000 00:13:51.430 [2024-11-06T09:06:50.470Z] tsc_hz: 2490000000 (cyc) 00:13:51.430 [2024-11-06T09:06:50.470Z] ====================================== 00:13:51.430 [2024-11-06T09:06:50.470Z] poller_cost: 495 (cyc), 198 (nsec) 00:13:51.430 00:13:51.430 real 0m1.586s 00:13:51.430 user 0m1.372s 00:13:51.430 sys 0m0.106s 00:13:51.430 09:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.430 09:06:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:51.430 ************************************ 00:13:51.430 END TEST thread_poller_perf 00:13:51.430 ************************************ 00:13:51.430 09:06:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:51.430 00:13:51.430 real 0m3.560s 00:13:51.430 user 0m2.930s 00:13:51.430 sys 0m0.421s 00:13:51.430 09:06:50 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.430 09:06:50 thread -- common/autotest_common.sh@10 -- # set +x 00:13:51.430 ************************************ 00:13:51.430 END TEST thread 00:13:51.430 ************************************ 00:13:51.430 09:06:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:13:51.430 09:06:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:51.430 09:06:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:51.430 09:06:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.430 09:06:50 -- common/autotest_common.sh@10 -- # set +x 00:13:51.430 ************************************ 00:13:51.430 START TEST app_cmdline 00:13:51.430 ************************************ 00:13:51.430 09:06:50 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:51.430 * Looking for test storage... 00:13:51.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:51.430 09:06:50 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:51.430 09:06:50 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:13:51.430 09:06:50 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.689 09:06:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.689 --rc genhtml_branch_coverage=1 00:13:51.689 --rc genhtml_function_coverage=1 00:13:51.689 --rc 
genhtml_legend=1 00:13:51.689 --rc geninfo_all_blocks=1 00:13:51.689 --rc geninfo_unexecuted_blocks=1 00:13:51.689 00:13:51.689 ' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.689 --rc genhtml_branch_coverage=1 00:13:51.689 --rc genhtml_function_coverage=1 00:13:51.689 --rc genhtml_legend=1 00:13:51.689 --rc geninfo_all_blocks=1 00:13:51.689 --rc geninfo_unexecuted_blocks=1 00:13:51.689 00:13:51.689 ' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.689 --rc genhtml_branch_coverage=1 00:13:51.689 --rc genhtml_function_coverage=1 00:13:51.689 --rc genhtml_legend=1 00:13:51.689 --rc geninfo_all_blocks=1 00:13:51.689 --rc geninfo_unexecuted_blocks=1 00:13:51.689 00:13:51.689 ' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.689 --rc genhtml_branch_coverage=1 00:13:51.689 --rc genhtml_function_coverage=1 00:13:51.689 --rc genhtml_legend=1 00:13:51.689 --rc geninfo_all_blocks=1 00:13:51.689 --rc geninfo_unexecuted_blocks=1 00:13:51.689 00:13:51.689 ' 00:13:51.689 09:06:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:51.689 09:06:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59639 00:13:51.689 09:06:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:51.689 09:06:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59639 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59639 ']' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:13:51.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.689 09:06:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:51.689 [2024-11-06 09:06:50.601566] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:13:51.689 [2024-11-06 09:06:50.601703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:13:51.948 [2024-11-06 09:06:50.784597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.948 [2024-11-06 09:06:50.902855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.885 09:06:51 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.885 09:06:51 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:13:52.885 09:06:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:53.144 { 00:13:53.144 "version": "SPDK v25.01-pre git sha1 cc533a3e5", 00:13:53.144 "fields": { 00:13:53.144 "major": 25, 00:13:53.144 "minor": 1, 00:13:53.144 "patch": 0, 00:13:53.144 "suffix": "-pre", 00:13:53.144 "commit": "cc533a3e5" 00:13:53.144 } 00:13:53.144 } 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:53.144 09:06:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:53.144 09:06:52 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:53.403 request: 00:13:53.403 { 00:13:53.403 "method": "env_dpdk_get_mem_stats", 00:13:53.403 "req_id": 1 00:13:53.403 } 00:13:53.403 Got JSON-RPC error response 00:13:53.403 response: 00:13:53.403 { 00:13:53.403 "code": -32601, 00:13:53.403 "message": "Method not found" 00:13:53.403 } 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.403 09:06:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59639 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59639 ']' 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59639 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59639 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:53.403 killing process with pid 59639 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59639' 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@971 -- # kill 59639 00:13:53.403 09:06:52 app_cmdline -- common/autotest_common.sh@976 -- # wait 59639 00:13:55.936 00:13:55.936 real 0m4.629s 00:13:55.936 user 0m4.879s 00:13:55.936 sys 0m0.677s 00:13:55.936 09:06:54 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:55.936 09:06:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:55.936 ************************************ 00:13:55.936 END TEST app_cmdline 00:13:55.936 ************************************ 00:13:55.936 09:06:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:55.936 09:06:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:55.936 09:06:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:55.936 09:06:54 -- common/autotest_common.sh@10 -- # set +x 00:13:55.936 ************************************ 00:13:55.936 START TEST version 00:13:55.936 ************************************ 00:13:55.936 09:06:54 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:56.195 * Looking for test storage... 00:13:56.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1691 -- # lcov --version 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:56.195 09:06:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.195 09:06:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.195 09:06:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.195 09:06:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.195 09:06:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.195 09:06:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.195 09:06:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.195 09:06:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.195 09:06:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.195 09:06:55 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:13:56.195 09:06:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.195 09:06:55 version -- scripts/common.sh@344 -- # case "$op" in 00:13:56.195 09:06:55 version -- scripts/common.sh@345 -- # : 1 00:13:56.195 09:06:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.195 09:06:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.195 09:06:55 version -- scripts/common.sh@365 -- # decimal 1 00:13:56.195 09:06:55 version -- scripts/common.sh@353 -- # local d=1 00:13:56.195 09:06:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.195 09:06:55 version -- scripts/common.sh@355 -- # echo 1 00:13:56.195 09:06:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.195 09:06:55 version -- scripts/common.sh@366 -- # decimal 2 00:13:56.195 09:06:55 version -- scripts/common.sh@353 -- # local d=2 00:13:56.195 09:06:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.195 09:06:55 version -- scripts/common.sh@355 -- # echo 2 00:13:56.195 09:06:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.195 09:06:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.195 09:06:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.195 09:06:55 version -- scripts/common.sh@368 -- # return 0 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.195 --rc genhtml_branch_coverage=1 00:13:56.195 --rc genhtml_function_coverage=1 00:13:56.195 --rc genhtml_legend=1 00:13:56.195 --rc geninfo_all_blocks=1 00:13:56.195 --rc geninfo_unexecuted_blocks=1 00:13:56.195 00:13:56.195 ' 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:13:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.195 --rc genhtml_branch_coverage=1 00:13:56.195 --rc genhtml_function_coverage=1 00:13:56.195 --rc genhtml_legend=1 00:13:56.195 --rc geninfo_all_blocks=1 00:13:56.195 --rc geninfo_unexecuted_blocks=1 00:13:56.195 00:13:56.195 ' 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.195 --rc genhtml_branch_coverage=1 00:13:56.195 --rc genhtml_function_coverage=1 00:13:56.195 --rc genhtml_legend=1 00:13:56.195 --rc geninfo_all_blocks=1 00:13:56.195 --rc geninfo_unexecuted_blocks=1 00:13:56.195 00:13:56.195 ' 00:13:56.195 09:06:55 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.195 --rc genhtml_branch_coverage=1 00:13:56.195 --rc genhtml_function_coverage=1 00:13:56.195 --rc genhtml_legend=1 00:13:56.195 --rc geninfo_all_blocks=1 00:13:56.195 --rc geninfo_unexecuted_blocks=1 00:13:56.195 00:13:56.195 ' 00:13:56.195 09:06:55 version -- app/version.sh@17 -- # get_header_version major 00:13:56.195 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:13:56.195 09:06:55 version -- app/version.sh@17 -- # major=25 00:13:56.195 09:06:55 version -- app/version.sh@18 -- # get_header_version minor 00:13:56.195 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:56.195 09:06:55 version -- app/version.sh@18 -- # minor=1 00:13:56.195 09:06:55 
version -- app/version.sh@19 -- # get_header_version patch 00:13:56.195 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:56.195 09:06:55 version -- app/version.sh@19 -- # patch=0 00:13:56.195 09:06:55 version -- app/version.sh@20 -- # get_header_version suffix 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:13:56.195 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:56.195 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:13:56.195 09:06:55 version -- app/version.sh@20 -- # suffix=-pre 00:13:56.195 09:06:55 version -- app/version.sh@22 -- # version=25.1 00:13:56.195 09:06:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:56.195 09:06:55 version -- app/version.sh@28 -- # version=25.1rc0 00:13:56.195 09:06:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:56.195 09:06:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:56.455 09:06:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:56.455 09:06:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:56.455 ************************************ 00:13:56.455 END TEST version 00:13:56.455 ************************************ 00:13:56.455 00:13:56.455 real 0m0.293s 00:13:56.455 user 0m0.174s 00:13:56.455 sys 0m0.172s 00:13:56.455 09:06:55 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:56.455 09:06:55 version -- common/autotest_common.sh@10 -- # set +x 00:13:56.455 
09:06:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:13:56.455 09:06:55 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:13:56.455 09:06:55 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:56.455 09:06:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:56.455 09:06:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:56.455 09:06:55 -- common/autotest_common.sh@10 -- # set +x 00:13:56.455 ************************************ 00:13:56.455 START TEST bdev_raid 00:13:56.455 ************************************ 00:13:56.455 09:06:55 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:56.455 * Looking for test storage... 00:13:56.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:56.455 09:06:55 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:56.455 09:06:55 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:56.455 09:06:55 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:56.715 09:06:55 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@345 -- # : 1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.715 09:06:55 bdev_raid -- scripts/common.sh@368 -- # return 0 00:13:56.715 09:06:55 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.715 09:06:55 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:56.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.715 --rc genhtml_branch_coverage=1 00:13:56.716 --rc genhtml_function_coverage=1 00:13:56.716 --rc genhtml_legend=1 00:13:56.716 --rc geninfo_all_blocks=1 00:13:56.716 --rc geninfo_unexecuted_blocks=1 00:13:56.716 00:13:56.716 ' 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:56.716 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:56.716 --rc genhtml_branch_coverage=1 00:13:56.716 --rc genhtml_function_coverage=1 00:13:56.716 --rc genhtml_legend=1 00:13:56.716 --rc geninfo_all_blocks=1 00:13:56.716 --rc geninfo_unexecuted_blocks=1 00:13:56.716 00:13:56.716 ' 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:56.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.716 --rc genhtml_branch_coverage=1 00:13:56.716 --rc genhtml_function_coverage=1 00:13:56.716 --rc genhtml_legend=1 00:13:56.716 --rc geninfo_all_blocks=1 00:13:56.716 --rc geninfo_unexecuted_blocks=1 00:13:56.716 00:13:56.716 ' 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:56.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.716 --rc genhtml_branch_coverage=1 00:13:56.716 --rc genhtml_function_coverage=1 00:13:56.716 --rc genhtml_legend=1 00:13:56.716 --rc geninfo_all_blocks=1 00:13:56.716 --rc geninfo_unexecuted_blocks=1 00:13:56.716 00:13:56.716 ' 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:56.716 09:06:55 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:13:56.716 09:06:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:56.716 09:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.716 ************************************ 
00:13:56.716 START TEST raid1_resize_data_offset_test 00:13:56.716 ************************************ 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59832 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59832' 00:13:56.716 Process raid pid: 59832 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59832 00:13:56.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59832 ']' 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.716 09:06:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.716 [2024-11-06 09:06:55.686697] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:13:56.716 [2024-11-06 09:06:55.686823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.975 [2024-11-06 09:06:55.868518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.975 [2024-11-06 09:06:55.994137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.234 [2024-11-06 09:06:56.211478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.234 [2024-11-06 09:06:56.211520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.492 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.493 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:13:57.493 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:13:57.493 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.493 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 malloc0 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 malloc1 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.751 09:06:56 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 null0 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 [2024-11-06 09:06:56.697956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:13:57.751 [2024-11-06 09:06:56.700075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:57.751 [2024-11-06 09:06:56.700133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:13:57.751 [2024-11-06 09:06:56.700302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:57.751 [2024-11-06 09:06:56.700321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:13:57.751 [2024-11-06 09:06:56.700608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:57.751 [2024-11-06 09:06:56.700767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:57.751 [2024-11-06 09:06:56.700790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:57.751 [2024-11-06 09:06:56.700929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.751 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.752 [2024-11-06 09:06:56.753885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:13:57.752 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.752 09:06:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:13:57.752 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.752 09:06:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.319 malloc2 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.319 [2024-11-06 09:06:57.325111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:58.319 [2024-11-06 09:06:57.343041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.319 [2024-11-06 09:06:57.345089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.319 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59832 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59832 ']' 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59832 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59832 00:13:58.579 killing process with pid 59832 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59832' 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59832 00:13:58.579 09:06:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59832 00:13:58.579 [2024-11-06 09:06:57.442020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.579 [2024-11-06 09:06:57.442318] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:13:58.579 [2024-11-06 09:06:57.442368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.579 [2024-11-06 09:06:57.442387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:13:58.579 [2024-11-06 09:06:57.479891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.579 [2024-11-06 09:06:57.480216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.579 [2024-11-06 09:06:57.480236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:00.479 [2024-11-06 09:06:59.305828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.853 09:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:14:01.853 00:14:01.853 real 0m4.863s 00:14:01.853 user 0m4.728s 00:14:01.853 sys 0m0.593s 00:14:01.853 09:07:00 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.854 09:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.854 ************************************ 00:14:01.854 END TEST raid1_resize_data_offset_test 00:14:01.854 ************************************ 00:14:01.854 09:07:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:14:01.854 09:07:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:01.854 09:07:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.854 09:07:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.854 ************************************ 00:14:01.854 START TEST raid0_resize_superblock_test 00:14:01.854 ************************************ 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59910 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59910' 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:01.854 Process raid pid: 59910 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59910 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59910 ']' 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.854 09:07:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.854 [2024-11-06 09:07:00.624264] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:01.854 [2024-11-06 09:07:00.624407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.854 [2024-11-06 09:07:00.807519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.112 [2024-11-06 09:07:00.928836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.112 [2024-11-06 09:07:01.135995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.112 [2024-11-06 09:07:01.136043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.680 09:07:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.680 09:07:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:02.680 09:07:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:14:02.680 09:07:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.680 09:07:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:03.247 malloc0 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 [2024-11-06 09:07:02.059442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:03.247 [2024-11-06 09:07:02.059510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.247 [2024-11-06 09:07:02.059540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:03.247 [2024-11-06 09:07:02.059557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.247 [2024-11-06 09:07:02.062019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.247 [2024-11-06 09:07:02.062063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:03.247 pt0 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 b0574a99-d30a-4cfc-be94-b1a0c3777c0f 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 4a43425e-14b6-40e9-92ff-5132c5439c2f 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 44fd3b1c-9fa1-4993-a1d3-f49254ec9592 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 [2024-11-06 09:07:02.195183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a43425e-14b6-40e9-92ff-5132c5439c2f is claimed 00:14:03.247 [2024-11-06 09:07:02.195268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44fd3b1c-9fa1-4993-a1d3-f49254ec9592 is claimed 00:14:03.247 [2024-11-06 09:07:02.195411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:03.247 [2024-11-06 09:07:02.195429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:14:03.247 [2024-11-06 09:07:02.195696] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:03.247 [2024-11-06 09:07:02.195897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:03.247 [2024-11-06 09:07:02.195909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:03.247 [2024-11-06 09:07:02.196076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.247 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:14:03.506 09:07:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:14:03.506 [2024-11-06 09:07:02.291218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 [2024-11-06 09:07:02.335126] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:03.506 [2024-11-06 09:07:02.335154] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4a43425e-14b6-40e9-92ff-5132c5439c2f' was resized: old size 131072, new size 204800 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 [2024-11-06 09:07:02.343030] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:03.506 [2024-11-06 09:07:02.343054] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '44fd3b1c-9fa1-4993-a1d3-f49254ec9592' was resized: old size 131072, new size 204800 00:14:03.506 [2024-11-06 09:07:02.343084] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 [2024-11-06 09:07:02.447005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 [2024-11-06 09:07:02.486745] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:14:03.506 [2024-11-06 09:07:02.486816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:14:03.506 [2024-11-06 09:07:02.486839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.506 [2024-11-06 09:07:02.486859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:14:03.506 [2024-11-06 09:07:02.486972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.506 [2024-11-06 09:07:02.487003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.506 [2024-11-06 09:07:02.487017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.506 [2024-11-06 09:07:02.494669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:03.506 [2024-11-06 09:07:02.494725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.506 [2024-11-06 09:07:02.494747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:03.506 [2024-11-06 09:07:02.494762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.506 [2024-11-06 09:07:02.497197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.506 [2024-11-06 09:07:02.497238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:14:03.506 [2024-11-06 09:07:02.498840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4a43425e-14b6-40e9-92ff-5132c5439c2f 00:14:03.506 [2024-11-06 09:07:02.498921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a43425e-14b6-40e9-92ff-5132c5439c2f is claimed 00:14:03.506 [2024-11-06 09:07:02.499034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 44fd3b1c-9fa1-4993-a1d3-f49254ec9592 00:14:03.506 [2024-11-06 09:07:02.499055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44fd3b1c-9fa1-4993-a1d3-f49254ec9592 is claimed 00:14:03.506 [2024-11-06 09:07:02.499218] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 44fd3b1c-9fa1-4993-a1d3-f49254ec9592 (2) smaller than existing raid bdev Raid (3) 00:14:03.506 [2024-11-06 09:07:02.499245] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4a43425e-14b6-40e9-92ff-5132c5439c2f: File exists 00:14:03.506 [2024-11-06 09:07:02.499297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:03.506 [2024-11-06 09:07:02.499311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:14:03.506 pt0 00:14:03.506 [2024-11-06 09:07:02.499557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:03.506 [2024-11-06 09:07:02.499696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:03.506 [2024-11-06 09:07:02.499705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:14:03.506 [2024-11-06 09:07:02.499843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:14:03.506 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:03.507 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:14:03.507 [2024-11-06 09:07:02.519099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59910 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59910 ']' 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59910 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59910 00:14:03.765 killing process with pid 59910 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59910' 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59910 00:14:03.765 [2024-11-06 09:07:02.606833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.765 [2024-11-06 09:07:02.606903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.765 [2024-11-06 09:07:02.606946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.765 [2024-11-06 09:07:02.606956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:14:03.765 09:07:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59910 00:14:05.141 [2024-11-06 09:07:04.064873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.518 09:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:14:06.518 00:14:06.518 real 0m4.664s 00:14:06.518 user 0m4.884s 00:14:06.518 sys 0m0.643s 00:14:06.518 ************************************ 00:14:06.518 END TEST raid0_resize_superblock_test 00:14:06.518 ************************************ 00:14:06.518 09:07:05 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.518 09:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.518 09:07:05 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:14:06.518 09:07:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:06.518 09:07:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.518 09:07:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.518 ************************************ 00:14:06.518 START TEST raid1_resize_superblock_test 00:14:06.518 ************************************ 00:14:06.518 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60017 00:14:06.519 Process raid pid: 60017 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60017' 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60017 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60017 ']' 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.519 09:07:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.519 [2024-11-06 09:07:05.372063] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:06.519 [2024-11-06 09:07:05.372233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.778 [2024-11-06 09:07:05.571599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.778 [2024-11-06 09:07:05.709903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.038 [2024-11-06 09:07:05.945166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.038 [2024-11-06 09:07:05.945480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.297 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.297 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:07.297 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:14:07.297 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.297 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.865 malloc0 00:14:07.865 
09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.865 [2024-11-06 09:07:06.860225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:07.865 [2024-11-06 09:07:06.860316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.865 [2024-11-06 09:07:06.860347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:07.865 [2024-11-06 09:07:06.860364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.865 [2024-11-06 09:07:06.862973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.865 [2024-11-06 09:07:06.863163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:07.865 pt0 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.865 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 f5f004f5-0a0a-4a69-973c-f69076576133 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:14:08.124 09:07:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 1284e78f-a63d-4b04-a731-24e4a8d8a662 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 4792346f-b6f5-4800-8b03-4b3faef6f3d3 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 [2024-11-06 09:07:06.993833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1284e78f-a63d-4b04-a731-24e4a8d8a662 is claimed 00:14:08.124 [2024-11-06 09:07:06.994097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4792346f-b6f5-4800-8b03-4b3faef6f3d3 is claimed 00:14:08.124 [2024-11-06 09:07:06.994250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:08.124 [2024-11-06 09:07:06.994271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:14:08.124 [2024-11-06 09:07:06.994580] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:08.124 [2024-11-06 09:07:06.994774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:08.124 [2024-11-06 09:07:06.994787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:08.124 [2024-11-06 09:07:06.994960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.124 09:07:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 [2024-11-06 09:07:07.106149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 [2024-11-06 09:07:07.142044] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:08.124 [2024-11-06 09:07:07.142080] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1284e78f-a63d-4b04-a731-24e4a8d8a662' was resized: old size 131072, new size 204800 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.124 [2024-11-06 09:07:07.153946] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:08.124 [2024-11-06 09:07:07.153976] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4792346f-b6f5-4800-8b03-4b3faef6f3d3' was resized: old size 131072, new size 204800 00:14:08.124 [2024-11-06 09:07:07.154014] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.124 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.383 09:07:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:14:08.383 [2024-11-06 09:07:07.270054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 [2024-11-06 09:07:07.317824] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:14:08.383 [2024-11-06 09:07:07.317916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:14:08.383 [2024-11-06 09:07:07.317950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:14:08.383 [2024-11-06 09:07:07.318124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.383 [2024-11-06 09:07:07.318378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.383 [2024-11-06 09:07:07.318454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.383 [2024-11-06 09:07:07.318473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.383 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 [2024-11-06 09:07:07.329722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:08.383 [2024-11-06 09:07:07.329789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.383 [2024-11-06 09:07:07.329815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:08.383 [2024-11-06 09:07:07.329834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.383 [2024-11-06 09:07:07.332524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.383 [2024-11-06 09:07:07.332569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:14:08.383 [2024-11-06 09:07:07.334432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1284e78f-a63d-4b04-a731-24e4a8d8a662 00:14:08.384 [2024-11-06 09:07:07.334508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1284e78f-a63d-4b04-a731-24e4a8d8a662 is claimed 00:14:08.384 [2024-11-06 09:07:07.334629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4792346f-b6f5-4800-8b03-4b3faef6f3d3 00:14:08.384 [2024-11-06 09:07:07.334652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4792346f-b6f5-4800-8b03-4b3faef6f3d3 is claimed 00:14:08.384 [2024-11-06 09:07:07.334810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4792346f-b6f5-4800-8b03-4b3faef6f3d3 (2) smaller than existing raid bdev Raid (3) 00:14:08.384 [2024-11-06 09:07:07.334836] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1284e78f-a63d-4b04-a731-24e4a8d8a662: File exists 00:14:08.384 [2024-11-06 09:07:07.334879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.384 [2024-11-06 09:07:07.334893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:08.384 [2024-11-06 09:07:07.335161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:08.384 pt0 00:14:08.384 [2024-11-06 09:07:07.335379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.384 [2024-11-06 09:07:07.335393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:14:08.384 [2024-11-06 09:07:07.335565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:14:08.384 [2024-11-06 09:07:07.354745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60017 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60017 ']' 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60017 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.384 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60017 00:14:08.643 killing process with pid 60017 00:14:08.643 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.643 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.643 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60017' 00:14:08.643 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60017 00:14:08.643 [2024-11-06 09:07:07.429083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.643 [2024-11-06 09:07:07.429177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.643 09:07:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60017 00:14:08.643 [2024-11-06 09:07:07.429239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.643 [2024-11-06 09:07:07.429251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:14:10.020 [2024-11-06 09:07:08.867866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.955 09:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:14:10.955 00:14:10.955 real 0m4.729s 00:14:10.955 user 0m4.974s 00:14:10.955 sys 0m0.641s 00:14:10.955 09:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.955 ************************************ 00:14:10.955 END TEST raid1_resize_superblock_test 00:14:10.955 
************************************ 00:14:10.955 09:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:14:11.214 09:07:10 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:11.214 09:07:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:11.214 09:07:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:11.214 09:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.214 ************************************ 00:14:11.214 START TEST raid_function_test_raid0 00:14:11.214 ************************************ 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:14:11.214 Process raid pid: 60120 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60120 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60120' 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60120 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60120 ']' 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.214 09:07:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:11.214 [2024-11-06 09:07:10.199196] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:11.214 [2024-11-06 09:07:10.199357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.473 [2024-11-06 09:07:10.393691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.732 [2024-11-06 09:07:10.511767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.732 [2024-11-06 09:07:10.721744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.732 [2024-11-06 09:07:10.721800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 Base_1 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 Base_2 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 [2024-11-06 09:07:11.181288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:12.301 [2024-11-06 09:07:11.183493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:12.301 [2024-11-06 09:07:11.183565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.301 [2024-11-06 09:07:11.183579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:12.301 [2024-11-06 09:07:11.183844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:12.301 [2024-11-06 09:07:11.183977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:14:12.301 [2024-11-06 09:07:11.183987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:14:12.301 [2024-11-06 09:07:11.184132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.301 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:14:12.560 [2024-11-06 09:07:11.464939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:12.560 /dev/nbd0 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.560 1+0 records in 00:14:12.560 1+0 records out 00:14:12.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290433 s, 14.1 MB/s 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.560 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:14:12.561 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.561 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:12.819 { 00:14:12.819 "nbd_device": "/dev/nbd0", 00:14:12.819 "bdev_name": "raid" 00:14:12.819 } 00:14:12.819 ]' 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:12.819 { 00:14:12.819 "nbd_device": "/dev/nbd0", 00:14:12.819 "bdev_name": "raid" 00:14:12.819 } 00:14:12.819 ]' 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:14:12.819 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:13.076 4096+0 records in 00:14:13.076 4096+0 records out 00:14:13.076 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0389723 s, 53.8 MB/s 00:14:13.076 09:07:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:13.335 4096+0 records in 00:14:13.335 4096+0 records out 00:14:13.335 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.266034 s, 7.9 MB/s 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:13.335 128+0 records in 00:14:13.335 128+0 records out 00:14:13.335 65536 bytes (66 kB, 64 KiB) copied, 0.000673793 s, 97.3 MB/s 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:13.335 2035+0 records in 00:14:13.335 2035+0 records out 00:14:13.335 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0207579 s, 50.2 MB/s 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:13.335 456+0 records in 00:14:13.335 456+0 records out 00:14:13.335 233472 bytes (233 kB, 228 KiB) copied, 0.00613566 s, 38.1 MB/s 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:13.335 09:07:12 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.335 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.594 [2024-11-06 09:07:12.520909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.594 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60120 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@952 -- # '[' -z 60120 ']' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60120 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60120 00:14:13.853 killing process with pid 60120 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60120' 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60120 00:14:13.853 [2024-11-06 09:07:12.847065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.853 [2024-11-06 09:07:12.847168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.853 09:07:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60120 00:14:13.853 [2024-11-06 09:07:12.847216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.853 [2024-11-06 09:07:12.847235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:14:14.112 [2024-11-06 09:07:13.059846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.491 ************************************ 00:14:15.491 END TEST raid_function_test_raid0 00:14:15.491 ************************************ 00:14:15.491 09:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 
00:14:15.491 00:14:15.491 real 0m4.099s 00:14:15.491 user 0m4.751s 00:14:15.491 sys 0m1.112s 00:14:15.491 09:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:15.491 09:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:15.491 09:07:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:14:15.491 09:07:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:15.491 09:07:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.491 09:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.491 ************************************ 00:14:15.491 START TEST raid_function_test_concat 00:14:15.491 ************************************ 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60249 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60249' 00:14:15.491 Process raid pid: 60249 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60249 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60249 ']' 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:15.491 09:07:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:15.491 [2024-11-06 09:07:14.364183] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:15.491 [2024-11-06 09:07:14.364514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.749 [2024-11-06 09:07:14.545523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.749 [2024-11-06 09:07:14.669383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.007 [2024-11-06 09:07:14.887702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.007 [2024-11-06 09:07:14.887745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.265 09:07:15 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:16.265 Base_1 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:16.265 Base_2 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:16.265 [2024-11-06 09:07:15.295343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:16.265 [2024-11-06 09:07:15.297513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:16.265 [2024-11-06 09:07:15.297831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:16.265 [2024-11-06 09:07:15.297857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:16.265 [2024-11-06 09:07:15.298175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:16.265 [2024-11-06 09:07:15.298351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.265 [2024-11-06 09:07:15.298363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:14:16.265 [2024-11-06 
09:07:15.298552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.265 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.523 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.523 09:07:15 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:14:16.782 [2024-11-06 09:07:15.562972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.782 /dev/nbd0 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.782 1+0 records in 00:14:16.782 1+0 records out 00:14:16.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346452 s, 11.8 MB/s 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 
00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.782 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:17.040 { 00:14:17.040 "nbd_device": "/dev/nbd0", 00:14:17.040 "bdev_name": "raid" 00:14:17.040 } 00:14:17.040 ]' 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:17.040 { 00:14:17.040 "nbd_device": "/dev/nbd0", 00:14:17.040 "bdev_name": "raid" 00:14:17.040 } 00:14:17.040 ]' 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:17.040 09:07:15 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:14:17.040 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:17.040 4096+0 records in 00:14:17.040 4096+0 records out 00:14:17.040 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0403583 s, 52.0 MB/s 00:14:17.041 09:07:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:17.300 4096+0 records in 00:14:17.300 4096+0 records out 00:14:17.300 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.232825 s, 9.0 MB/s 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:17.300 128+0 records in 00:14:17.300 128+0 records out 00:14:17.300 65536 bytes (66 kB, 64 KiB) copied, 0.00127821 s, 51.3 MB/s 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:17.300 2035+0 records in 00:14:17.300 2035+0 records out 00:14:17.300 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0197513 s, 52.8 MB/s 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:17.300 456+0 records in 00:14:17.300 456+0 records out 00:14:17.300 233472 bytes (233 kB, 228 KiB) copied, 0.00252765 s, 92.4 MB/s 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.300 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.560 [2024-11-06 09:07:16.524333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.560 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60249 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60249 ']' 
00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60249 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60249 00:14:17.818 killing process with pid 60249 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60249' 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60249 00:14:17.818 [2024-11-06 09:07:16.840849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.818 09:07:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60249 00:14:17.818 [2024-11-06 09:07:16.840952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.819 [2024-11-06 09:07:16.841006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.819 [2024-11-06 09:07:16.841021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:14:18.075 [2024-11-06 09:07:17.051945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.452 09:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:14:19.452 00:14:19.452 real 0m3.948s 00:14:19.452 user 0m4.457s 00:14:19.452 sys 0m1.086s 00:14:19.452 09:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:14:19.452 09:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:19.452 ************************************ 00:14:19.452 END TEST raid_function_test_concat 00:14:19.452 ************************************ 00:14:19.452 09:07:18 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:14:19.452 09:07:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:19.452 09:07:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:19.452 09:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.452 ************************************ 00:14:19.452 START TEST raid0_resize_test 00:14:19.452 ************************************ 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:14:19.452 Process raid pid: 60377 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60377 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60377' 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60377 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60377 ']' 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.452 09:07:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.452 [2024-11-06 09:07:18.389601] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:14:19.452 [2024-11-06 09:07:18.389980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.711 [2024-11-06 09:07:18.589268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.711 [2024-11-06 09:07:18.714466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.970 [2024-11-06 09:07:18.944703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.970 [2024-11-06 09:07:18.944750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.538 Base_1 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.538 Base_2 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.538 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.538 [2024-11-06 09:07:19.299515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:20.538 [2024-11-06 09:07:19.301685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:20.538 [2024-11-06 09:07:19.301911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:20.538 [2024-11-06 09:07:19.301936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:20.538 [2024-11-06 09:07:19.302195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:20.538 [2024-11-06 09:07:19.302339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:20.539 [2024-11-06 09:07:19.302350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:20.539 [2024-11-06 09:07:19.302495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 [2024-11-06 09:07:19.307461] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:20.539 [2024-11-06 09:07:19.307492] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:20.539 true 
00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 [2024-11-06 09:07:19.319610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 [2024-11-06 09:07:19.359385] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:20.539 [2024-11-06 09:07:19.359550] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:20.539 [2024-11-06 09:07:19.359596] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:20.539 true 
00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:14:20.539 [2024-11-06 09:07:19.371547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60377 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60377 ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60377 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60377 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.539 09:07:19 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60377' 00:14:20.539 killing process with pid 60377 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60377 00:14:20.539 [2024-11-06 09:07:19.465189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.539 [2024-11-06 09:07:19.465477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.539 09:07:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60377 00:14:20.539 [2024-11-06 09:07:19.465734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.539 [2024-11-06 09:07:19.465949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:20.539 [2024-11-06 09:07:19.484031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.928 ************************************ 00:14:21.928 END TEST raid0_resize_test 00:14:21.928 ************************************ 00:14:21.928 09:07:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:14:21.928 00:14:21.928 real 0m2.333s 00:14:21.928 user 0m2.462s 00:14:21.928 sys 0m0.424s 00:14:21.928 09:07:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.928 09:07:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.928 09:07:20 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:14:21.928 09:07:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:21.928 09:07:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.928 09:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.928 
************************************ 00:14:21.929 START TEST raid1_resize_test 00:14:21.929 ************************************ 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60433 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:21.929 Process raid pid: 60433 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60433' 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60433 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60433 ']' 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.929 09:07:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.929 [2024-11-06 09:07:20.788583] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:21.929 [2024-11-06 09:07:20.788953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.187 [2024-11-06 09:07:20.971715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.187 [2024-11-06 09:07:21.099392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.446 [2024-11-06 09:07:21.323650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.446 [2024-11-06 09:07:21.323881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.705 Base_1 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.705 09:07:21 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.705 Base_2 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.705 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.705 [2024-11-06 09:07:21.744068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:22.965 [2024-11-06 09:07:21.746141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:22.965 [2024-11-06 09:07:21.746363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.965 [2024-11-06 09:07:21.746389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:22.965 [2024-11-06 09:07:21.746653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:22.965 [2024-11-06 09:07:21.746781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.965 [2024-11-06 09:07:21.746791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:22.965 [2024-11-06 09:07:21.746940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.965 09:07:21 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 [2024-11-06 09:07:21.756027] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:22.965 [2024-11-06 09:07:21.756060] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:22.965 true 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:14:22.965 [2024-11-06 09:07:21.768175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 [2024-11-06 09:07:21.811943] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:22.965 [2024-11-06 09:07:21.811970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:22.965 [2024-11-06 09:07:21.812006] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:14:22.965 true 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 [2024-11-06 09:07:21.828104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60433 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 
-- # '[' -z 60433 ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60433 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60433 00:14:22.965 killing process with pid 60433 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60433' 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60433 00:14:22.965 [2024-11-06 09:07:21.908835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.965 09:07:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60433 00:14:22.965 [2024-11-06 09:07:21.908961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.965 [2024-11-06 09:07:21.909671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.965 [2024-11-06 09:07:21.909868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:22.965 [2024-11-06 09:07:21.928666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.368 09:07:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:14:24.368 00:14:24.368 real 0m2.365s 00:14:24.368 user 0m2.555s 00:14:24.368 sys 0m0.392s 00:14:24.368 ************************************ 00:14:24.368 END TEST raid1_resize_test 00:14:24.368 ************************************ 00:14:24.368 
09:07:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.368 09:07:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.368 09:07:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:24.368 09:07:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:24.368 09:07:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:24.368 09:07:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:24.368 09:07:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:24.368 09:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.368 ************************************ 00:14:24.368 START TEST raid_state_function_test 00:14:24.368 ************************************ 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.368 
09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:24.368 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:24.368 Process raid pid: 60495 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60495 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60495' 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60495 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60495 ']' 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.369 09:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.369 [2024-11-06 09:07:23.242169] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:14:24.369 [2024-11-06 09:07:23.242725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.628 [2024-11-06 09:07:23.446893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.628 [2024-11-06 09:07:23.573517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.886 [2024-11-06 09:07:23.790761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.886 [2024-11-06 09:07:23.790811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.145 [2024-11-06 09:07:24.076453] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.145 [2024-11-06 09:07:24.076515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.145 [2024-11-06 09:07:24.076528] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.145 [2024-11-06 09:07:24.076542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.145 09:07:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.145 "name": "Existed_Raid", 00:14:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.145 "strip_size_kb": 64, 00:14:25.145 "state": "configuring", 00:14:25.145 
"raid_level": "raid0", 00:14:25.145 "superblock": false, 00:14:25.145 "num_base_bdevs": 2, 00:14:25.145 "num_base_bdevs_discovered": 0, 00:14:25.145 "num_base_bdevs_operational": 2, 00:14:25.145 "base_bdevs_list": [ 00:14:25.145 { 00:14:25.145 "name": "BaseBdev1", 00:14:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.145 "is_configured": false, 00:14:25.145 "data_offset": 0, 00:14:25.145 "data_size": 0 00:14:25.145 }, 00:14:25.145 { 00:14:25.145 "name": "BaseBdev2", 00:14:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.145 "is_configured": false, 00:14:25.145 "data_offset": 0, 00:14:25.145 "data_size": 0 00:14:25.145 } 00:14:25.145 ] 00:14:25.145 }' 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.145 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 [2024-11-06 09:07:24.471911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.715 [2024-11-06 09:07:24.472085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:25.715 [2024-11-06 09:07:24.479877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.715 [2024-11-06 09:07:24.479927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.715 [2024-11-06 09:07:24.479939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.715 [2024-11-06 09:07:24.479955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 [2024-11-06 09:07:24.526897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.715 BaseBdev1 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 [ 00:14:25.715 { 00:14:25.715 "name": "BaseBdev1", 00:14:25.715 "aliases": [ 00:14:25.715 "289e6920-57e4-4798-84fa-f6475a9dc034" 00:14:25.715 ], 00:14:25.715 "product_name": "Malloc disk", 00:14:25.715 "block_size": 512, 00:14:25.715 "num_blocks": 65536, 00:14:25.715 "uuid": "289e6920-57e4-4798-84fa-f6475a9dc034", 00:14:25.715 "assigned_rate_limits": { 00:14:25.715 "rw_ios_per_sec": 0, 00:14:25.715 "rw_mbytes_per_sec": 0, 00:14:25.715 "r_mbytes_per_sec": 0, 00:14:25.715 "w_mbytes_per_sec": 0 00:14:25.715 }, 00:14:25.715 "claimed": true, 00:14:25.715 "claim_type": "exclusive_write", 00:14:25.715 "zoned": false, 00:14:25.715 "supported_io_types": { 00:14:25.715 "read": true, 00:14:25.715 "write": true, 00:14:25.715 "unmap": true, 00:14:25.715 "flush": true, 00:14:25.715 "reset": true, 00:14:25.715 "nvme_admin": false, 00:14:25.715 "nvme_io": false, 00:14:25.715 "nvme_io_md": false, 00:14:25.715 "write_zeroes": true, 00:14:25.715 "zcopy": true, 00:14:25.715 "get_zone_info": false, 00:14:25.715 "zone_management": false, 00:14:25.715 "zone_append": false, 00:14:25.715 "compare": false, 00:14:25.715 "compare_and_write": false, 00:14:25.715 "abort": true, 00:14:25.715 "seek_hole": false, 00:14:25.715 "seek_data": false, 00:14:25.715 "copy": true, 00:14:25.715 "nvme_iov_md": 
false 00:14:25.715 }, 00:14:25.715 "memory_domains": [ 00:14:25.715 { 00:14:25.715 "dma_device_id": "system", 00:14:25.715 "dma_device_type": 1 00:14:25.715 }, 00:14:25.715 { 00:14:25.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.715 "dma_device_type": 2 00:14:25.715 } 00:14:25.715 ], 00:14:25.715 "driver_specific": {} 00:14:25.715 } 00:14:25.715 ] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.715 
09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.715 "name": "Existed_Raid", 00:14:25.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.715 "strip_size_kb": 64, 00:14:25.715 "state": "configuring", 00:14:25.715 "raid_level": "raid0", 00:14:25.716 "superblock": false, 00:14:25.716 "num_base_bdevs": 2, 00:14:25.716 "num_base_bdevs_discovered": 1, 00:14:25.716 "num_base_bdevs_operational": 2, 00:14:25.716 "base_bdevs_list": [ 00:14:25.716 { 00:14:25.716 "name": "BaseBdev1", 00:14:25.716 "uuid": "289e6920-57e4-4798-84fa-f6475a9dc034", 00:14:25.716 "is_configured": true, 00:14:25.716 "data_offset": 0, 00:14:25.716 "data_size": 65536 00:14:25.716 }, 00:14:25.716 { 00:14:25.716 "name": "BaseBdev2", 00:14:25.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.716 "is_configured": false, 00:14:25.716 "data_offset": 0, 00:14:25.716 "data_size": 0 00:14:25.716 } 00:14:25.716 ] 00:14:25.716 }' 00:14:25.716 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.716 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.975 [2024-11-06 09:07:24.978367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.975 [2024-11-06 09:07:24.978425] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.975 [2024-11-06 09:07:24.986464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.975 [2024-11-06 09:07:24.989413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.975 [2024-11-06 09:07:24.989477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.975 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.976 09:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.235 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.235 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.235 "name": "Existed_Raid", 00:14:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.235 "strip_size_kb": 64, 00:14:26.235 "state": "configuring", 00:14:26.235 "raid_level": "raid0", 00:14:26.235 "superblock": false, 00:14:26.235 "num_base_bdevs": 2, 00:14:26.235 "num_base_bdevs_discovered": 1, 00:14:26.235 "num_base_bdevs_operational": 2, 00:14:26.235 "base_bdevs_list": [ 00:14:26.235 { 00:14:26.235 "name": "BaseBdev1", 00:14:26.235 "uuid": "289e6920-57e4-4798-84fa-f6475a9dc034", 00:14:26.235 "is_configured": true, 00:14:26.235 "data_offset": 0, 00:14:26.235 "data_size": 65536 00:14:26.235 }, 00:14:26.235 { 00:14:26.235 "name": "BaseBdev2", 00:14:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.235 "is_configured": false, 00:14:26.235 "data_offset": 0, 00:14:26.235 "data_size": 0 00:14:26.235 } 00:14:26.235 
] 00:14:26.235 }' 00:14:26.235 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.235 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.495 [2024-11-06 09:07:25.452428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.495 [2024-11-06 09:07:25.452625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:26.495 [2024-11-06 09:07:25.452671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:26.495 [2024-11-06 09:07:25.453104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:26.495 [2024-11-06 09:07:25.453437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:26.495 [2024-11-06 09:07:25.453465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:26.495 [2024-11-06 09:07:25.453783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.495 BaseBdev2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:26.495 09:07:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.495 [ 00:14:26.495 { 00:14:26.495 "name": "BaseBdev2", 00:14:26.495 "aliases": [ 00:14:26.495 "11ab8e60-488d-438e-8913-2de51fbd03a7" 00:14:26.495 ], 00:14:26.495 "product_name": "Malloc disk", 00:14:26.495 "block_size": 512, 00:14:26.495 "num_blocks": 65536, 00:14:26.495 "uuid": "11ab8e60-488d-438e-8913-2de51fbd03a7", 00:14:26.495 "assigned_rate_limits": { 00:14:26.495 "rw_ios_per_sec": 0, 00:14:26.495 "rw_mbytes_per_sec": 0, 00:14:26.495 "r_mbytes_per_sec": 0, 00:14:26.495 "w_mbytes_per_sec": 0 00:14:26.495 }, 00:14:26.495 "claimed": true, 00:14:26.495 "claim_type": "exclusive_write", 00:14:26.495 "zoned": false, 00:14:26.495 "supported_io_types": { 00:14:26.495 "read": true, 00:14:26.495 "write": true, 00:14:26.495 "unmap": true, 00:14:26.495 "flush": true, 00:14:26.495 "reset": true, 00:14:26.495 "nvme_admin": false, 00:14:26.495 "nvme_io": false, 00:14:26.495 "nvme_io_md": 
false, 00:14:26.495 "write_zeroes": true, 00:14:26.495 "zcopy": true, 00:14:26.495 "get_zone_info": false, 00:14:26.495 "zone_management": false, 00:14:26.495 "zone_append": false, 00:14:26.495 "compare": false, 00:14:26.495 "compare_and_write": false, 00:14:26.495 "abort": true, 00:14:26.495 "seek_hole": false, 00:14:26.495 "seek_data": false, 00:14:26.495 "copy": true, 00:14:26.495 "nvme_iov_md": false 00:14:26.495 }, 00:14:26.495 "memory_domains": [ 00:14:26.495 { 00:14:26.495 "dma_device_id": "system", 00:14:26.495 "dma_device_type": 1 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.495 "dma_device_type": 2 00:14:26.495 } 00:14:26.495 ], 00:14:26.495 "driver_specific": {} 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.495 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.496 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.496 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.496 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.496 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.496 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.755 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.755 "name": "Existed_Raid", 00:14:26.755 "uuid": "a130d516-674d-4fa8-b17d-5f7b826325b5", 00:14:26.755 "strip_size_kb": 64, 00:14:26.755 "state": "online", 00:14:26.755 "raid_level": "raid0", 00:14:26.755 "superblock": false, 00:14:26.755 "num_base_bdevs": 2, 00:14:26.755 "num_base_bdevs_discovered": 2, 00:14:26.755 "num_base_bdevs_operational": 2, 00:14:26.755 "base_bdevs_list": [ 00:14:26.755 { 00:14:26.755 "name": "BaseBdev1", 00:14:26.755 "uuid": "289e6920-57e4-4798-84fa-f6475a9dc034", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev2", 00:14:26.755 "uuid": "11ab8e60-488d-438e-8913-2de51fbd03a7", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }' 00:14:26.755 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:26.755 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 [2024-11-06 09:07:25.896616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.014 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.014 "name": "Existed_Raid", 00:14:27.014 "aliases": [ 00:14:27.014 "a130d516-674d-4fa8-b17d-5f7b826325b5" 00:14:27.014 ], 00:14:27.014 "product_name": "Raid Volume", 00:14:27.014 "block_size": 512, 00:14:27.014 "num_blocks": 131072, 00:14:27.014 "uuid": "a130d516-674d-4fa8-b17d-5f7b826325b5", 00:14:27.014 "assigned_rate_limits": { 00:14:27.014 "rw_ios_per_sec": 0, 00:14:27.014 "rw_mbytes_per_sec": 0, 00:14:27.014 "r_mbytes_per_sec": 
0, 00:14:27.014 "w_mbytes_per_sec": 0 00:14:27.014 }, 00:14:27.014 "claimed": false, 00:14:27.014 "zoned": false, 00:14:27.014 "supported_io_types": { 00:14:27.014 "read": true, 00:14:27.014 "write": true, 00:14:27.014 "unmap": true, 00:14:27.014 "flush": true, 00:14:27.014 "reset": true, 00:14:27.014 "nvme_admin": false, 00:14:27.015 "nvme_io": false, 00:14:27.015 "nvme_io_md": false, 00:14:27.015 "write_zeroes": true, 00:14:27.015 "zcopy": false, 00:14:27.015 "get_zone_info": false, 00:14:27.015 "zone_management": false, 00:14:27.015 "zone_append": false, 00:14:27.015 "compare": false, 00:14:27.015 "compare_and_write": false, 00:14:27.015 "abort": false, 00:14:27.015 "seek_hole": false, 00:14:27.015 "seek_data": false, 00:14:27.015 "copy": false, 00:14:27.015 "nvme_iov_md": false 00:14:27.015 }, 00:14:27.015 "memory_domains": [ 00:14:27.015 { 00:14:27.015 "dma_device_id": "system", 00:14:27.015 "dma_device_type": 1 00:14:27.015 }, 00:14:27.015 { 00:14:27.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.015 "dma_device_type": 2 00:14:27.015 }, 00:14:27.015 { 00:14:27.015 "dma_device_id": "system", 00:14:27.015 "dma_device_type": 1 00:14:27.015 }, 00:14:27.015 { 00:14:27.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.015 "dma_device_type": 2 00:14:27.015 } 00:14:27.015 ], 00:14:27.015 "driver_specific": { 00:14:27.015 "raid": { 00:14:27.015 "uuid": "a130d516-674d-4fa8-b17d-5f7b826325b5", 00:14:27.015 "strip_size_kb": 64, 00:14:27.015 "state": "online", 00:14:27.015 "raid_level": "raid0", 00:14:27.015 "superblock": false, 00:14:27.015 "num_base_bdevs": 2, 00:14:27.015 "num_base_bdevs_discovered": 2, 00:14:27.015 "num_base_bdevs_operational": 2, 00:14:27.015 "base_bdevs_list": [ 00:14:27.015 { 00:14:27.015 "name": "BaseBdev1", 00:14:27.015 "uuid": "289e6920-57e4-4798-84fa-f6475a9dc034", 00:14:27.015 "is_configured": true, 00:14:27.015 "data_offset": 0, 00:14:27.015 "data_size": 65536 00:14:27.015 }, 00:14:27.015 { 00:14:27.015 "name": "BaseBdev2", 
00:14:27.015 "uuid": "11ab8e60-488d-438e-8913-2de51fbd03a7", 00:14:27.015 "is_configured": true, 00:14:27.015 "data_offset": 0, 00:14:27.015 "data_size": 65536 00:14:27.015 } 00:14:27.015 ] 00:14:27.015 } 00:14:27.015 } 00:14:27.015 }' 00:14:27.015 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.015 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:27.015 BaseBdev2' 00:14:27.015 09:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.015 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.274 [2024-11-06 09:07:26.104087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.274 [2024-11-06 09:07:26.104124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.274 [2024-11-06 09:07:26.104176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.274 "name": "Existed_Raid", 00:14:27.274 "uuid": "a130d516-674d-4fa8-b17d-5f7b826325b5", 00:14:27.274 "strip_size_kb": 64, 00:14:27.274 
"state": "offline", 00:14:27.274 "raid_level": "raid0", 00:14:27.274 "superblock": false, 00:14:27.274 "num_base_bdevs": 2, 00:14:27.274 "num_base_bdevs_discovered": 1, 00:14:27.274 "num_base_bdevs_operational": 1, 00:14:27.274 "base_bdevs_list": [ 00:14:27.274 { 00:14:27.274 "name": null, 00:14:27.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.274 "is_configured": false, 00:14:27.274 "data_offset": 0, 00:14:27.274 "data_size": 65536 00:14:27.274 }, 00:14:27.274 { 00:14:27.274 "name": "BaseBdev2", 00:14:27.274 "uuid": "11ab8e60-488d-438e-8913-2de51fbd03a7", 00:14:27.274 "is_configured": true, 00:14:27.274 "data_offset": 0, 00:14:27.274 "data_size": 65536 00:14:27.274 } 00:14:27.274 ] 00:14:27.274 }' 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.274 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.841 [2024-11-06 09:07:26.652447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.841 [2024-11-06 09:07:26.652508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60495 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60495 ']' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60495 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60495 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:27.841 killing process with pid 60495 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60495' 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60495 00:14:27.841 [2024-11-06 09:07:26.844120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.841 09:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60495 00:14:27.841 [2024-11-06 09:07:26.861076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.372 09:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:29.372 00:14:29.372 real 0m4.857s 00:14:29.372 user 0m6.870s 00:14:29.372 sys 0m0.941s 00:14:29.372 ************************************ 00:14:29.372 END TEST raid_state_function_test 00:14:29.372 09:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:29.372 09:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 ************************************ 00:14:29.372 09:07:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:29.372 09:07:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:14:29.372 09:07:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:29.372 09:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 ************************************ 00:14:29.372 START TEST raid_state_function_test_sb 00:14:29.372 ************************************ 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:29.372 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60743 00:14:29.373 Process raid pid: 60743 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60743' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60743 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60743 ']' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:29.373 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:29.373 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.373 [2024-11-06 09:07:28.162466] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:29.373 [2024-11-06 09:07:28.163113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.373 [2024-11-06 09:07:28.347643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.631 [2024-11-06 09:07:28.469563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.891 [2024-11-06 09:07:28.683213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.891 [2024-11-06 09:07:28.683261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.151 [2024-11-06 09:07:28.992898] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:14:30.151 [2024-11-06 09:07:28.992957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.151 [2024-11-06 09:07:28.992969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.151 [2024-11-06 09:07:28.992982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.151 09:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.151 "name": "Existed_Raid", 00:14:30.151 "uuid": "c2232251-4c96-43c8-be9a-44d003b489ff", 00:14:30.151 "strip_size_kb": 64, 00:14:30.151 "state": "configuring", 00:14:30.151 "raid_level": "raid0", 00:14:30.151 "superblock": true, 00:14:30.151 "num_base_bdevs": 2, 00:14:30.151 "num_base_bdevs_discovered": 0, 00:14:30.151 "num_base_bdevs_operational": 2, 00:14:30.151 "base_bdevs_list": [ 00:14:30.151 { 00:14:30.151 "name": "BaseBdev1", 00:14:30.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.151 "is_configured": false, 00:14:30.151 "data_offset": 0, 00:14:30.151 "data_size": 0 00:14:30.151 }, 00:14:30.151 { 00:14:30.151 "name": "BaseBdev2", 00:14:30.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.151 "is_configured": false, 00:14:30.151 "data_offset": 0, 00:14:30.151 "data_size": 0 00:14:30.151 } 00:14:30.151 ] 00:14:30.151 }' 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.151 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.410 [2024-11-06 09:07:29.388442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.410 
[2024-11-06 09:07:29.388491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.410 [2024-11-06 09:07:29.400443] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.410 [2024-11-06 09:07:29.400491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.410 [2024-11-06 09:07:29.400502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.410 [2024-11-06 09:07:29.400517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.410 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.410 [2024-11-06 09:07:29.448086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.410 BaseBdev1 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.669 [ 00:14:30.669 { 00:14:30.669 "name": "BaseBdev1", 00:14:30.669 "aliases": [ 00:14:30.669 "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997" 00:14:30.669 ], 00:14:30.669 "product_name": "Malloc disk", 00:14:30.669 "block_size": 512, 00:14:30.669 "num_blocks": 65536, 00:14:30.669 "uuid": "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997", 00:14:30.669 "assigned_rate_limits": { 00:14:30.669 "rw_ios_per_sec": 0, 00:14:30.669 "rw_mbytes_per_sec": 0, 00:14:30.669 "r_mbytes_per_sec": 0, 00:14:30.669 "w_mbytes_per_sec": 0 00:14:30.669 }, 00:14:30.669 "claimed": true, 00:14:30.669 "claim_type": 
"exclusive_write", 00:14:30.669 "zoned": false, 00:14:30.669 "supported_io_types": { 00:14:30.669 "read": true, 00:14:30.669 "write": true, 00:14:30.669 "unmap": true, 00:14:30.669 "flush": true, 00:14:30.669 "reset": true, 00:14:30.669 "nvme_admin": false, 00:14:30.669 "nvme_io": false, 00:14:30.669 "nvme_io_md": false, 00:14:30.669 "write_zeroes": true, 00:14:30.669 "zcopy": true, 00:14:30.669 "get_zone_info": false, 00:14:30.669 "zone_management": false, 00:14:30.669 "zone_append": false, 00:14:30.669 "compare": false, 00:14:30.669 "compare_and_write": false, 00:14:30.669 "abort": true, 00:14:30.669 "seek_hole": false, 00:14:30.669 "seek_data": false, 00:14:30.669 "copy": true, 00:14:30.669 "nvme_iov_md": false 00:14:30.669 }, 00:14:30.669 "memory_domains": [ 00:14:30.669 { 00:14:30.669 "dma_device_id": "system", 00:14:30.669 "dma_device_type": 1 00:14:30.669 }, 00:14:30.669 { 00:14:30.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.669 "dma_device_type": 2 00:14:30.669 } 00:14:30.669 ], 00:14:30.669 "driver_specific": {} 00:14:30.669 } 00:14:30.669 ] 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.669 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.670 "name": "Existed_Raid", 00:14:30.670 "uuid": "d9609d10-a4be-402e-99f6-3e7ea0e1ed4e", 00:14:30.670 "strip_size_kb": 64, 00:14:30.670 "state": "configuring", 00:14:30.670 "raid_level": "raid0", 00:14:30.670 "superblock": true, 00:14:30.670 "num_base_bdevs": 2, 00:14:30.670 "num_base_bdevs_discovered": 1, 00:14:30.670 "num_base_bdevs_operational": 2, 00:14:30.670 "base_bdevs_list": [ 00:14:30.670 { 00:14:30.670 "name": "BaseBdev1", 00:14:30.670 "uuid": "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997", 00:14:30.670 "is_configured": true, 00:14:30.670 "data_offset": 2048, 00:14:30.670 "data_size": 63488 00:14:30.670 }, 00:14:30.670 { 00:14:30.670 "name": "BaseBdev2", 00:14:30.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.670 "is_configured": false, 00:14:30.670 "data_offset": 0, 00:14:30.670 
"data_size": 0 00:14:30.670 } 00:14:30.670 ] 00:14:30.670 }' 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.670 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 [2024-11-06 09:07:29.879538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.928 [2024-11-06 09:07:29.879598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 [2024-11-06 09:07:29.891572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.928 [2024-11-06 09:07:29.893650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.928 [2024-11-06 09:07:29.893707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.928 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:30.928 "name": "Existed_Raid", 00:14:30.928 "uuid": "1708cb41-0a1d-4b11-b76d-644742ea64dc", 00:14:30.928 "strip_size_kb": 64, 00:14:30.928 "state": "configuring", 00:14:30.928 "raid_level": "raid0", 00:14:30.928 "superblock": true, 00:14:30.928 "num_base_bdevs": 2, 00:14:30.928 "num_base_bdevs_discovered": 1, 00:14:30.928 "num_base_bdevs_operational": 2, 00:14:30.928 "base_bdevs_list": [ 00:14:30.928 { 00:14:30.928 "name": "BaseBdev1", 00:14:30.928 "uuid": "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997", 00:14:30.928 "is_configured": true, 00:14:30.928 "data_offset": 2048, 00:14:30.928 "data_size": 63488 00:14:30.928 }, 00:14:30.928 { 00:14:30.928 "name": "BaseBdev2", 00:14:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.928 "is_configured": false, 00:14:30.928 "data_offset": 0, 00:14:30.928 "data_size": 0 00:14:30.928 } 00:14:30.928 ] 00:14:30.928 }' 00:14:30.929 09:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.929 09:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.496 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.496 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.496 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.496 [2024-11-06 09:07:30.295366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.496 [2024-11-06 09:07:30.295617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:31.496 [2024-11-06 09:07:30.295634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.496 [2024-11-06 09:07:30.295909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:31.497 [2024-11-06 09:07:30.296058] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:31.497 [2024-11-06 09:07:30.296072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:31.497 BaseBdev2 00:14:31.497 [2024-11-06 09:07:30.296206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:31.497 [ 00:14:31.497 { 00:14:31.497 "name": "BaseBdev2", 00:14:31.497 "aliases": [ 00:14:31.497 "e9352e55-c820-46e8-83b4-7e6bf736ec03" 00:14:31.497 ], 00:14:31.497 "product_name": "Malloc disk", 00:14:31.497 "block_size": 512, 00:14:31.497 "num_blocks": 65536, 00:14:31.497 "uuid": "e9352e55-c820-46e8-83b4-7e6bf736ec03", 00:14:31.497 "assigned_rate_limits": { 00:14:31.497 "rw_ios_per_sec": 0, 00:14:31.497 "rw_mbytes_per_sec": 0, 00:14:31.497 "r_mbytes_per_sec": 0, 00:14:31.497 "w_mbytes_per_sec": 0 00:14:31.497 }, 00:14:31.497 "claimed": true, 00:14:31.497 "claim_type": "exclusive_write", 00:14:31.497 "zoned": false, 00:14:31.497 "supported_io_types": { 00:14:31.497 "read": true, 00:14:31.497 "write": true, 00:14:31.497 "unmap": true, 00:14:31.497 "flush": true, 00:14:31.497 "reset": true, 00:14:31.497 "nvme_admin": false, 00:14:31.497 "nvme_io": false, 00:14:31.497 "nvme_io_md": false, 00:14:31.497 "write_zeroes": true, 00:14:31.497 "zcopy": true, 00:14:31.497 "get_zone_info": false, 00:14:31.497 "zone_management": false, 00:14:31.497 "zone_append": false, 00:14:31.497 "compare": false, 00:14:31.497 "compare_and_write": false, 00:14:31.497 "abort": true, 00:14:31.497 "seek_hole": false, 00:14:31.497 "seek_data": false, 00:14:31.497 "copy": true, 00:14:31.497 "nvme_iov_md": false 00:14:31.497 }, 00:14:31.497 "memory_domains": [ 00:14:31.497 { 00:14:31.497 "dma_device_id": "system", 00:14:31.497 "dma_device_type": 1 00:14:31.497 }, 00:14:31.497 { 00:14:31.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.497 "dma_device_type": 2 00:14:31.497 } 00:14:31.497 ], 00:14:31.497 "driver_specific": {} 00:14:31.497 } 00:14:31.497 ] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.497 
09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.497 "name": 
"Existed_Raid", 00:14:31.497 "uuid": "1708cb41-0a1d-4b11-b76d-644742ea64dc", 00:14:31.497 "strip_size_kb": 64, 00:14:31.497 "state": "online", 00:14:31.497 "raid_level": "raid0", 00:14:31.497 "superblock": true, 00:14:31.497 "num_base_bdevs": 2, 00:14:31.497 "num_base_bdevs_discovered": 2, 00:14:31.497 "num_base_bdevs_operational": 2, 00:14:31.497 "base_bdevs_list": [ 00:14:31.497 { 00:14:31.497 "name": "BaseBdev1", 00:14:31.497 "uuid": "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997", 00:14:31.497 "is_configured": true, 00:14:31.497 "data_offset": 2048, 00:14:31.497 "data_size": 63488 00:14:31.497 }, 00:14:31.497 { 00:14:31.497 "name": "BaseBdev2", 00:14:31.497 "uuid": "e9352e55-c820-46e8-83b4-7e6bf736ec03", 00:14:31.497 "is_configured": true, 00:14:31.497 "data_offset": 2048, 00:14:31.497 "data_size": 63488 00:14:31.497 } 00:14:31.497 ] 00:14:31.497 }' 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.497 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.757 [2024-11-06 09:07:30.731188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.757 "name": "Existed_Raid", 00:14:31.757 "aliases": [ 00:14:31.757 "1708cb41-0a1d-4b11-b76d-644742ea64dc" 00:14:31.757 ], 00:14:31.757 "product_name": "Raid Volume", 00:14:31.757 "block_size": 512, 00:14:31.757 "num_blocks": 126976, 00:14:31.757 "uuid": "1708cb41-0a1d-4b11-b76d-644742ea64dc", 00:14:31.757 "assigned_rate_limits": { 00:14:31.757 "rw_ios_per_sec": 0, 00:14:31.757 "rw_mbytes_per_sec": 0, 00:14:31.757 "r_mbytes_per_sec": 0, 00:14:31.757 "w_mbytes_per_sec": 0 00:14:31.757 }, 00:14:31.757 "claimed": false, 00:14:31.757 "zoned": false, 00:14:31.757 "supported_io_types": { 00:14:31.757 "read": true, 00:14:31.757 "write": true, 00:14:31.757 "unmap": true, 00:14:31.757 "flush": true, 00:14:31.757 "reset": true, 00:14:31.757 "nvme_admin": false, 00:14:31.757 "nvme_io": false, 00:14:31.757 "nvme_io_md": false, 00:14:31.757 "write_zeroes": true, 00:14:31.757 "zcopy": false, 00:14:31.757 "get_zone_info": false, 00:14:31.757 "zone_management": false, 00:14:31.757 "zone_append": false, 00:14:31.757 "compare": false, 00:14:31.757 "compare_and_write": false, 00:14:31.757 "abort": false, 00:14:31.757 "seek_hole": false, 00:14:31.757 "seek_data": false, 00:14:31.757 "copy": false, 00:14:31.757 "nvme_iov_md": false 00:14:31.757 }, 00:14:31.757 "memory_domains": [ 00:14:31.757 { 00:14:31.757 "dma_device_id": "system", 00:14:31.757 "dma_device_type": 1 00:14:31.757 }, 00:14:31.757 { 00:14:31.757 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:31.757 "dma_device_type": 2 00:14:31.757 }, 00:14:31.757 { 00:14:31.757 "dma_device_id": "system", 00:14:31.757 "dma_device_type": 1 00:14:31.757 }, 00:14:31.757 { 00:14:31.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.757 "dma_device_type": 2 00:14:31.757 } 00:14:31.757 ], 00:14:31.757 "driver_specific": { 00:14:31.757 "raid": { 00:14:31.757 "uuid": "1708cb41-0a1d-4b11-b76d-644742ea64dc", 00:14:31.757 "strip_size_kb": 64, 00:14:31.757 "state": "online", 00:14:31.757 "raid_level": "raid0", 00:14:31.757 "superblock": true, 00:14:31.757 "num_base_bdevs": 2, 00:14:31.757 "num_base_bdevs_discovered": 2, 00:14:31.757 "num_base_bdevs_operational": 2, 00:14:31.757 "base_bdevs_list": [ 00:14:31.757 { 00:14:31.757 "name": "BaseBdev1", 00:14:31.757 "uuid": "97eb2c1f-90ea-46bf-bcdc-a0b2c44b7997", 00:14:31.757 "is_configured": true, 00:14:31.757 "data_offset": 2048, 00:14:31.757 "data_size": 63488 00:14:31.757 }, 00:14:31.757 { 00:14:31.757 "name": "BaseBdev2", 00:14:31.757 "uuid": "e9352e55-c820-46e8-83b4-7e6bf736ec03", 00:14:31.757 "is_configured": true, 00:14:31.757 "data_offset": 2048, 00:14:31.757 "data_size": 63488 00:14:31.757 } 00:14:31.757 ] 00:14:31.757 } 00:14:31.757 } 00:14:31.757 }' 00:14:31.757 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:32.017 BaseBdev2' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.017 09:07:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.017 09:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.017 [2024-11-06 09:07:30.930617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.017 [2024-11-06 09:07:30.930658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.017 [2024-11-06 09:07:30.930710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.017 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.276 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.276 "name": "Existed_Raid", 00:14:32.276 "uuid": "1708cb41-0a1d-4b11-b76d-644742ea64dc", 00:14:32.276 "strip_size_kb": 64, 00:14:32.276 "state": "offline", 00:14:32.276 "raid_level": "raid0", 00:14:32.276 "superblock": true, 00:14:32.276 "num_base_bdevs": 2, 00:14:32.276 "num_base_bdevs_discovered": 1, 00:14:32.276 "num_base_bdevs_operational": 1, 00:14:32.276 "base_bdevs_list": [ 00:14:32.276 { 00:14:32.276 "name": null, 00:14:32.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.276 "is_configured": false, 00:14:32.276 "data_offset": 0, 00:14:32.276 "data_size": 63488 00:14:32.276 }, 00:14:32.276 { 00:14:32.276 "name": "BaseBdev2", 00:14:32.276 "uuid": "e9352e55-c820-46e8-83b4-7e6bf736ec03", 00:14:32.276 "is_configured": true, 00:14:32.276 "data_offset": 2048, 00:14:32.276 "data_size": 63488 00:14:32.276 } 00:14:32.276 ] 00:14:32.276 }' 00:14:32.276 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:14:32.276 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.535 [2024-11-06 09:07:31.466512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.535 [2024-11-06 09:07:31.466576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.535 09:07:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:32.535 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60743 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60743 ']' 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60743 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60743 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.794 killing process with pid 60743 00:14:32.794 09:07:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60743' 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60743 00:14:32.794 [2024-11-06 09:07:31.655103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.794 09:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60743 00:14:32.794 [2024-11-06 09:07:31.671906] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.171 09:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:34.171 ************************************ 00:14:34.171 END TEST raid_state_function_test_sb 00:14:34.171 ************************************ 00:14:34.171 00:14:34.171 real 0m4.726s 00:14:34.171 user 0m6.679s 00:14:34.171 sys 0m0.891s 00:14:34.171 09:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.171 09:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.171 09:07:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:34.171 09:07:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:34.171 09:07:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.171 09:07:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.171 ************************************ 00:14:34.171 START TEST raid_superblock_test 00:14:34.171 ************************************ 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:34.171 09:07:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60995 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60995 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60995 ']' 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:34.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:34.171 09:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.171 [2024-11-06 09:07:32.958025] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:34.171 [2024-11-06 09:07:32.958348] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:14:34.171 [2024-11-06 09:07:33.138053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.430 [2024-11-06 09:07:33.292245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.688 [2024-11-06 09:07:33.508005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.688 [2024-11-06 09:07:33.508067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.947 malloc1 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.947 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.947 [2024-11-06 09:07:33.845325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:34.948 [2024-11-06 09:07:33.845551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.948 [2024-11-06 09:07:33.845730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.948 [2024-11-06 09:07:33.845850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.948 [2024-11-06 09:07:33.848718] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.948 [2024-11-06 09:07:33.848891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:34.948 pt1 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 malloc2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.948 [2024-11-06 09:07:33.905459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.948 [2024-11-06 09:07:33.905520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.948 [2024-11-06 09:07:33.905548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.948 [2024-11-06 09:07:33.905559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.948 [2024-11-06 09:07:33.907959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.948 [2024-11-06 09:07:33.907999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.948 pt2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 [2024-11-06 09:07:33.917522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.948 [2024-11-06 09:07:33.919653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.948 [2024-11-06 09:07:33.919806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.948 [2024-11-06 09:07:33.919820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.948 [2024-11-06 09:07:33.920085] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:34.948 [2024-11-06 09:07:33.920238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.948 [2024-11-06 09:07:33.920251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:34.948 [2024-11-06 09:07:33.920422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.948 "name": "raid_bdev1", 00:14:34.948 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:34.948 "strip_size_kb": 64, 00:14:34.948 "state": "online", 00:14:34.948 "raid_level": "raid0", 00:14:34.948 "superblock": true, 00:14:34.948 "num_base_bdevs": 2, 00:14:34.948 "num_base_bdevs_discovered": 2, 00:14:34.948 "num_base_bdevs_operational": 2, 00:14:34.948 "base_bdevs_list": [ 00:14:34.948 { 00:14:34.948 "name": "pt1", 00:14:34.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.948 "is_configured": true, 00:14:34.948 "data_offset": 2048, 00:14:34.948 "data_size": 63488 00:14:34.948 }, 00:14:34.948 { 00:14:34.948 "name": "pt2", 00:14:34.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.948 "is_configured": true, 00:14:34.948 "data_offset": 2048, 00:14:34.948 "data_size": 63488 00:14:34.948 } 00:14:34.948 ] 00:14:34.948 }' 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.948 09:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.516 [2024-11-06 09:07:34.333774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.516 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.516 "name": "raid_bdev1", 00:14:35.516 "aliases": [ 00:14:35.516 "e955a667-2375-4358-a5db-341f5784c3db" 00:14:35.516 ], 00:14:35.516 "product_name": "Raid Volume", 00:14:35.516 "block_size": 512, 00:14:35.516 "num_blocks": 126976, 00:14:35.516 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:35.516 "assigned_rate_limits": { 00:14:35.516 "rw_ios_per_sec": 0, 00:14:35.516 "rw_mbytes_per_sec": 0, 00:14:35.516 "r_mbytes_per_sec": 0, 00:14:35.516 "w_mbytes_per_sec": 0 00:14:35.516 }, 00:14:35.516 "claimed": false, 00:14:35.516 "zoned": false, 00:14:35.516 "supported_io_types": { 00:14:35.516 "read": true, 00:14:35.516 "write": true, 00:14:35.516 "unmap": true, 00:14:35.516 "flush": true, 00:14:35.516 "reset": true, 00:14:35.516 "nvme_admin": false, 00:14:35.516 "nvme_io": false, 00:14:35.516 "nvme_io_md": false, 00:14:35.516 "write_zeroes": true, 00:14:35.516 "zcopy": false, 00:14:35.516 "get_zone_info": false, 00:14:35.516 "zone_management": false, 00:14:35.516 "zone_append": false, 00:14:35.516 "compare": false, 00:14:35.516 "compare_and_write": false, 00:14:35.516 "abort": false, 00:14:35.516 "seek_hole": false, 00:14:35.516 "seek_data": false, 00:14:35.516 "copy": 
false, 00:14:35.516 "nvme_iov_md": false 00:14:35.516 }, 00:14:35.516 "memory_domains": [ 00:14:35.516 { 00:14:35.516 "dma_device_id": "system", 00:14:35.516 "dma_device_type": 1 00:14:35.516 }, 00:14:35.516 { 00:14:35.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.516 "dma_device_type": 2 00:14:35.516 }, 00:14:35.516 { 00:14:35.516 "dma_device_id": "system", 00:14:35.516 "dma_device_type": 1 00:14:35.516 }, 00:14:35.516 { 00:14:35.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.517 "dma_device_type": 2 00:14:35.517 } 00:14:35.517 ], 00:14:35.517 "driver_specific": { 00:14:35.517 "raid": { 00:14:35.517 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:35.517 "strip_size_kb": 64, 00:14:35.517 "state": "online", 00:14:35.517 "raid_level": "raid0", 00:14:35.517 "superblock": true, 00:14:35.517 "num_base_bdevs": 2, 00:14:35.517 "num_base_bdevs_discovered": 2, 00:14:35.517 "num_base_bdevs_operational": 2, 00:14:35.517 "base_bdevs_list": [ 00:14:35.517 { 00:14:35.517 "name": "pt1", 00:14:35.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.517 "is_configured": true, 00:14:35.517 "data_offset": 2048, 00:14:35.517 "data_size": 63488 00:14:35.517 }, 00:14:35.517 { 00:14:35.517 "name": "pt2", 00:14:35.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.517 "is_configured": true, 00:14:35.517 "data_offset": 2048, 00:14:35.517 "data_size": 63488 00:14:35.517 } 00:14:35.517 ] 00:14:35.517 } 00:14:35.517 } 00:14:35.517 }' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:35.517 pt2' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.517 09:07:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.517 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:35.777 [2024-11-06 09:07:34.557387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e955a667-2375-4358-a5db-341f5784c3db 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e955a667-2375-4358-a5db-341f5784c3db ']' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 [2024-11-06 09:07:34.605039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.777 [2024-11-06 09:07:34.605067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.777 [2024-11-06 09:07:34.605146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.777 [2024-11-06 09:07:34.605193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.777 [2024-11-06 09:07:34.605207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 [2024-11-06 09:07:34.736907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:35.777 [2024-11-06 09:07:34.739167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:35.777 [2024-11-06 09:07:34.739233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:14:35.777 [2024-11-06 09:07:34.739310] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:35.777 [2024-11-06 09:07:34.739329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.777 [2024-11-06 09:07:34.739346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:35.777 request: 00:14:35.777 { 00:14:35.777 "name": "raid_bdev1", 00:14:35.777 "raid_level": "raid0", 00:14:35.777 "base_bdevs": [ 00:14:35.777 "malloc1", 00:14:35.777 "malloc2" 00:14:35.777 ], 00:14:35.777 "strip_size_kb": 64, 00:14:35.777 "superblock": false, 00:14:35.777 "method": "bdev_raid_create", 00:14:35.777 "req_id": 1 00:14:35.777 } 00:14:35.777 Got JSON-RPC error response 00:14:35.777 response: 00:14:35.777 { 00:14:35.777 "code": -17, 00:14:35.777 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:35.777 } 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 09:07:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.777 [2024-11-06 09:07:34.796789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.777 [2024-11-06 09:07:34.796853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.777 [2024-11-06 09:07:34.796877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:35.777 [2024-11-06 09:07:34.796892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.777 [2024-11-06 09:07:34.799457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.777 [2024-11-06 09:07:34.799618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.777 [2024-11-06 09:07:34.799725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:35.777 [2024-11-06 09:07:34.799794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:35.777 pt1 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.777 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.036 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.036 "name": "raid_bdev1", 00:14:36.036 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:36.036 "strip_size_kb": 64, 00:14:36.036 "state": "configuring", 00:14:36.036 "raid_level": "raid0", 00:14:36.036 "superblock": true, 00:14:36.036 "num_base_bdevs": 2, 00:14:36.036 "num_base_bdevs_discovered": 1, 00:14:36.036 "num_base_bdevs_operational": 2, 00:14:36.036 "base_bdevs_list": [ 00:14:36.036 { 00:14:36.036 "name": "pt1", 00:14:36.036 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:36.036 "is_configured": true, 00:14:36.036 "data_offset": 2048, 00:14:36.036 "data_size": 63488 00:14:36.036 }, 00:14:36.036 { 00:14:36.036 "name": null, 00:14:36.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.037 "is_configured": false, 00:14:36.037 "data_offset": 2048, 00:14:36.037 "data_size": 63488 00:14:36.037 } 00:14:36.037 ] 00:14:36.037 }' 00:14:36.037 09:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.037 09:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.295 [2024-11-06 09:07:35.180435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.295 [2024-11-06 09:07:35.180645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.295 [2024-11-06 09:07:35.180680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:36.295 [2024-11-06 09:07:35.180695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.295 [2024-11-06 09:07:35.181178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.295 [2024-11-06 09:07:35.181202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:14:36.295 [2024-11-06 09:07:35.181313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:36.295 [2024-11-06 09:07:35.181341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.295 [2024-11-06 09:07:35.181465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:36.295 [2024-11-06 09:07:35.181478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:36.295 [2024-11-06 09:07:35.181784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.295 [2024-11-06 09:07:35.181958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:36.295 [2024-11-06 09:07:35.181975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:36.295 [2024-11-06 09:07:35.182129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.295 pt2 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.295 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.295 "name": "raid_bdev1", 00:14:36.295 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:36.295 "strip_size_kb": 64, 00:14:36.295 "state": "online", 00:14:36.295 "raid_level": "raid0", 00:14:36.295 "superblock": true, 00:14:36.295 "num_base_bdevs": 2, 00:14:36.295 "num_base_bdevs_discovered": 2, 00:14:36.295 "num_base_bdevs_operational": 2, 00:14:36.295 "base_bdevs_list": [ 00:14:36.295 { 00:14:36.295 "name": "pt1", 00:14:36.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.295 "is_configured": true, 00:14:36.295 "data_offset": 2048, 00:14:36.295 "data_size": 63488 00:14:36.295 }, 00:14:36.295 { 00:14:36.296 "name": "pt2", 00:14:36.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.296 "is_configured": true, 00:14:36.296 "data_offset": 2048, 00:14:36.296 "data_size": 63488 00:14:36.296 } 00:14:36.296 ] 00:14:36.296 }' 00:14:36.296 09:07:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.296 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.896 [2024-11-06 09:07:35.624038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.896 "name": "raid_bdev1", 00:14:36.896 "aliases": [ 00:14:36.896 "e955a667-2375-4358-a5db-341f5784c3db" 00:14:36.896 ], 00:14:36.896 "product_name": "Raid Volume", 00:14:36.896 "block_size": 512, 00:14:36.896 "num_blocks": 126976, 00:14:36.896 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:36.896 "assigned_rate_limits": { 00:14:36.896 "rw_ios_per_sec": 0, 00:14:36.896 "rw_mbytes_per_sec": 0, 00:14:36.896 
"r_mbytes_per_sec": 0, 00:14:36.896 "w_mbytes_per_sec": 0 00:14:36.896 }, 00:14:36.896 "claimed": false, 00:14:36.896 "zoned": false, 00:14:36.896 "supported_io_types": { 00:14:36.896 "read": true, 00:14:36.896 "write": true, 00:14:36.896 "unmap": true, 00:14:36.896 "flush": true, 00:14:36.896 "reset": true, 00:14:36.896 "nvme_admin": false, 00:14:36.896 "nvme_io": false, 00:14:36.896 "nvme_io_md": false, 00:14:36.896 "write_zeroes": true, 00:14:36.896 "zcopy": false, 00:14:36.896 "get_zone_info": false, 00:14:36.896 "zone_management": false, 00:14:36.896 "zone_append": false, 00:14:36.896 "compare": false, 00:14:36.896 "compare_and_write": false, 00:14:36.896 "abort": false, 00:14:36.896 "seek_hole": false, 00:14:36.896 "seek_data": false, 00:14:36.896 "copy": false, 00:14:36.896 "nvme_iov_md": false 00:14:36.896 }, 00:14:36.896 "memory_domains": [ 00:14:36.896 { 00:14:36.896 "dma_device_id": "system", 00:14:36.896 "dma_device_type": 1 00:14:36.896 }, 00:14:36.896 { 00:14:36.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.896 "dma_device_type": 2 00:14:36.896 }, 00:14:36.896 { 00:14:36.896 "dma_device_id": "system", 00:14:36.896 "dma_device_type": 1 00:14:36.896 }, 00:14:36.896 { 00:14:36.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.896 "dma_device_type": 2 00:14:36.896 } 00:14:36.896 ], 00:14:36.896 "driver_specific": { 00:14:36.896 "raid": { 00:14:36.896 "uuid": "e955a667-2375-4358-a5db-341f5784c3db", 00:14:36.896 "strip_size_kb": 64, 00:14:36.896 "state": "online", 00:14:36.896 "raid_level": "raid0", 00:14:36.896 "superblock": true, 00:14:36.896 "num_base_bdevs": 2, 00:14:36.896 "num_base_bdevs_discovered": 2, 00:14:36.896 "num_base_bdevs_operational": 2, 00:14:36.896 "base_bdevs_list": [ 00:14:36.896 { 00:14:36.896 "name": "pt1", 00:14:36.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.896 "is_configured": true, 00:14:36.896 "data_offset": 2048, 00:14:36.896 "data_size": 63488 00:14:36.896 }, 00:14:36.896 { 00:14:36.896 "name": 
"pt2", 00:14:36.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.896 "is_configured": true, 00:14:36.896 "data_offset": 2048, 00:14:36.896 "data_size": 63488 00:14:36.896 } 00:14:36.896 ] 00:14:36.896 } 00:14:36.896 } 00:14:36.896 }' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:36.896 pt2' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.896 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.897 09:07:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:36.897 [2024-11-06 09:07:35.855686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e955a667-2375-4358-a5db-341f5784c3db '!=' e955a667-2375-4358-a5db-341f5784c3db ']' 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60995 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60995 ']' 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 60995 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:36.897 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60995 00:14:37.155 killing process with pid 60995 00:14:37.155 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:37.155 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:37.155 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60995' 00:14:37.155 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60995 00:14:37.155 [2024-11-06 09:07:35.950140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.155 [2024-11-06 09:07:35.950246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.155 09:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60995 00:14:37.155 [2024-11-06 09:07:35.950310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.155 [2024-11-06 09:07:35.950326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:37.155 [2024-11-06 09:07:36.161611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.532 09:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:38.532 00:14:38.532 real 0m4.429s 00:14:38.532 user 0m6.188s 00:14:38.532 sys 0m0.806s 00:14:38.532 09:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:38.532 09:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:38.532 ************************************ 00:14:38.532 END TEST raid_superblock_test 00:14:38.532 ************************************ 00:14:38.532 09:07:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:38.532 09:07:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:38.532 09:07:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:38.532 09:07:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.532 ************************************ 00:14:38.532 START TEST raid_read_error_test 00:14:38.532 ************************************ 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HSTBgFIlfz 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61201 00:14:38.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61201 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61201 ']' 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.532 09:07:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.532 [2024-11-06 09:07:37.470829] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:38.532 [2024-11-06 09:07:37.471159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61201 ] 00:14:38.791 [2024-11-06 09:07:37.634918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.791 [2024-11-06 09:07:37.756712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.049 [2024-11-06 09:07:37.965319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.049 [2024-11-06 09:07:37.965672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for 
bdev in "${base_bdevs[@]}" 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.309 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 BaseBdev1_malloc 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 true 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 [2024-11-06 09:07:38.395828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:39.569 [2024-11-06 09:07:38.395895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.569 [2024-11-06 09:07:38.395921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:39.569 [2024-11-06 09:07:38.395936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.569 [2024-11-06 09:07:38.398456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.569 [2024-11-06 09:07:38.398502] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.569 BaseBdev1 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 BaseBdev2_malloc 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 true 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 [2024-11-06 09:07:38.464220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.569 [2024-11-06 09:07:38.464298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.569 [2024-11-06 09:07:38.464319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:39.569 [2024-11-06 
09:07:38.464333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.569 [2024-11-06 09:07:38.466762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.569 [2024-11-06 09:07:38.466809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.569 BaseBdev2 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.569 [2024-11-06 09:07:38.476285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.569 [2024-11-06 09:07:38.478657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.569 [2024-11-06 09:07:38.478997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:39.569 [2024-11-06 09:07:38.479128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.569 [2024-11-06 09:07:38.479441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:39.569 [2024-11-06 09:07:38.479651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.569 [2024-11-06 09:07:38.479695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:39.569 [2024-11-06 09:07:38.479985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.569 09:07:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:39.569 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.570 "name": "raid_bdev1", 00:14:39.570 "uuid": "4ec99a58-c3d7-45ef-81e3-330dc8b7b07f", 00:14:39.570 "strip_size_kb": 64, 00:14:39.570 "state": "online", 00:14:39.570 "raid_level": "raid0", 00:14:39.570 "superblock": true, 00:14:39.570 "num_base_bdevs": 2, 
00:14:39.570 "num_base_bdevs_discovered": 2, 00:14:39.570 "num_base_bdevs_operational": 2, 00:14:39.570 "base_bdevs_list": [ 00:14:39.570 { 00:14:39.570 "name": "BaseBdev1", 00:14:39.570 "uuid": "1df85a19-e1f6-546c-9c15-c048f43e64fc", 00:14:39.570 "is_configured": true, 00:14:39.570 "data_offset": 2048, 00:14:39.570 "data_size": 63488 00:14:39.570 }, 00:14:39.570 { 00:14:39.570 "name": "BaseBdev2", 00:14:39.570 "uuid": "231b78fa-6ace-5320-ab24-8d5c494f8454", 00:14:39.570 "is_configured": true, 00:14:39.570 "data_offset": 2048, 00:14:39.570 "data_size": 63488 00:14:39.570 } 00:14:39.570 ] 00:14:39.570 }' 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.570 09:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.138 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:40.138 09:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:40.138 [2024-11-06 09:07:38.984883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:41.076 09:07:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.076 "name": "raid_bdev1", 00:14:41.076 "uuid": "4ec99a58-c3d7-45ef-81e3-330dc8b7b07f", 00:14:41.076 "strip_size_kb": 64, 00:14:41.076 "state": "online", 00:14:41.076 "raid_level": "raid0", 00:14:41.076 "superblock": true, 00:14:41.076 "num_base_bdevs": 2, 
00:14:41.076 "num_base_bdevs_discovered": 2, 00:14:41.076 "num_base_bdevs_operational": 2, 00:14:41.076 "base_bdevs_list": [ 00:14:41.076 { 00:14:41.076 "name": "BaseBdev1", 00:14:41.076 "uuid": "1df85a19-e1f6-546c-9c15-c048f43e64fc", 00:14:41.076 "is_configured": true, 00:14:41.076 "data_offset": 2048, 00:14:41.076 "data_size": 63488 00:14:41.076 }, 00:14:41.076 { 00:14:41.076 "name": "BaseBdev2", 00:14:41.076 "uuid": "231b78fa-6ace-5320-ab24-8d5c494f8454", 00:14:41.076 "is_configured": true, 00:14:41.076 "data_offset": 2048, 00:14:41.076 "data_size": 63488 00:14:41.076 } 00:14:41.076 ] 00:14:41.076 }' 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.076 09:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.335 09:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.335 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.335 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.335 [2024-11-06 09:07:40.332173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.335 [2024-11-06 09:07:40.332215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.335 { 00:14:41.335 "results": [ 00:14:41.335 { 00:14:41.335 "job": "raid_bdev1", 00:14:41.335 "core_mask": "0x1", 00:14:41.335 "workload": "randrw", 00:14:41.335 "percentage": 50, 00:14:41.335 "status": "finished", 00:14:41.335 "queue_depth": 1, 00:14:41.335 "io_size": 131072, 00:14:41.335 "runtime": 1.347143, 00:14:41.335 "iops": 15473.487224444621, 00:14:41.335 "mibps": 1934.1859030555777, 00:14:41.335 "io_failed": 1, 00:14:41.335 "io_timeout": 0, 00:14:41.335 "avg_latency_us": 89.58772933044662, 00:14:41.335 "min_latency_us": 26.730923694779115, 00:14:41.335 "max_latency_us": 1447.5823293172691 
00:14:41.335 } 00:14:41.335 ], 00:14:41.335 "core_count": 1 00:14:41.335 } 00:14:41.335 [2024-11-06 09:07:40.335055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.335 [2024-11-06 09:07:40.335105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.336 [2024-11-06 09:07:40.335150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.336 [2024-11-06 09:07:40.335167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61201 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61201 ']' 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61201 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.336 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61201 00:14:41.595 killing process with pid 61201 00:14:41.595 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.595 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.595 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61201' 00:14:41.595 09:07:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61201 00:14:41.595 [2024-11-06 09:07:40.383935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.595 09:07:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61201 00:14:41.595 [2024-11-06 09:07:40.531355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HSTBgFIlfz 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:42.975 ************************************ 00:14:42.975 END TEST raid_read_error_test 00:14:42.975 ************************************ 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:42.975 00:14:42.975 real 0m4.364s 00:14:42.975 user 0m5.193s 00:14:42.975 sys 0m0.572s 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.975 09:07:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.975 09:07:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:42.975 09:07:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:42.975 09:07:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.975 09:07:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.975 ************************************ 00:14:42.975 START TEST raid_write_error_test 00:14:42.975 ************************************ 00:14:42.975 09:07:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:42.975 
09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MIlWOXRZXX 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61341 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61341 00:14:42.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61341 ']' 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.975 09:07:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.975 [2024-11-06 09:07:41.902083] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:14:42.975 [2024-11-06 09:07:41.902383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61341 ] 00:14:43.233 [2024-11-06 09:07:42.082375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.233 [2024-11-06 09:07:42.203991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.492 [2024-11-06 09:07:42.418379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.492 [2024-11-06 09:07:42.418612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.751 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 BaseBdev1_malloc 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 true 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 [2024-11-06 09:07:42.843732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:44.011 [2024-11-06 09:07:42.843926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.011 [2024-11-06 09:07:42.843960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:44.011 [2024-11-06 09:07:42.843975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.011 [2024-11-06 09:07:42.846508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.011 [2024-11-06 09:07:42.846557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.011 BaseBdev1 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 BaseBdev2_malloc 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:44.011 09:07:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 true 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 [2024-11-06 09:07:42.912314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.011 [2024-11-06 09:07:42.912373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.011 [2024-11-06 09:07:42.912394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:44.011 [2024-11-06 09:07:42.912408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.011 [2024-11-06 09:07:42.914900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.011 [2024-11-06 09:07:42.914947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.011 BaseBdev2 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 [2024-11-06 09:07:42.924367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:14:44.011 [2024-11-06 09:07:42.926528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.011 [2024-11-06 09:07:42.926731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.011 [2024-11-06 09:07:42.926753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.011 [2024-11-06 09:07:42.927025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:44.011 [2024-11-06 09:07:42.927184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.011 [2024-11-06 09:07:42.927198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:44.011 [2024-11-06 09:07:42.927373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.011 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.011 "name": "raid_bdev1", 00:14:44.011 "uuid": "796ecff7-009e-48be-93da-2a7c7f75edbc", 00:14:44.011 "strip_size_kb": 64, 00:14:44.011 "state": "online", 00:14:44.011 "raid_level": "raid0", 00:14:44.011 "superblock": true, 00:14:44.011 "num_base_bdevs": 2, 00:14:44.011 "num_base_bdevs_discovered": 2, 00:14:44.011 "num_base_bdevs_operational": 2, 00:14:44.011 "base_bdevs_list": [ 00:14:44.011 { 00:14:44.011 "name": "BaseBdev1", 00:14:44.011 "uuid": "33033bc3-1715-518a-8344-0d90f19106ec", 00:14:44.012 "is_configured": true, 00:14:44.012 "data_offset": 2048, 00:14:44.012 "data_size": 63488 00:14:44.012 }, 00:14:44.012 { 00:14:44.012 "name": "BaseBdev2", 00:14:44.012 "uuid": "00f2551d-e11a-54c1-b538-39c70400460d", 00:14:44.012 "is_configured": true, 00:14:44.012 "data_offset": 2048, 00:14:44.012 "data_size": 63488 00:14:44.012 } 00:14:44.012 ] 00:14:44.012 }' 00:14:44.012 09:07:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.012 09:07:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.585 09:07:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:44.585 09:07:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:44.585 [2024-11-06 09:07:43.524945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.522 09:07:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.522 "name": "raid_bdev1", 00:14:45.522 "uuid": "796ecff7-009e-48be-93da-2a7c7f75edbc", 00:14:45.522 "strip_size_kb": 64, 00:14:45.522 "state": "online", 00:14:45.522 "raid_level": "raid0", 00:14:45.522 "superblock": true, 00:14:45.522 "num_base_bdevs": 2, 00:14:45.522 "num_base_bdevs_discovered": 2, 00:14:45.522 "num_base_bdevs_operational": 2, 00:14:45.522 "base_bdevs_list": [ 00:14:45.522 { 00:14:45.522 "name": "BaseBdev1", 00:14:45.522 "uuid": "33033bc3-1715-518a-8344-0d90f19106ec", 00:14:45.522 "is_configured": true, 00:14:45.522 "data_offset": 2048, 00:14:45.522 "data_size": 63488 00:14:45.522 }, 00:14:45.522 { 00:14:45.522 "name": "BaseBdev2", 00:14:45.522 "uuid": "00f2551d-e11a-54c1-b538-39c70400460d", 00:14:45.522 "is_configured": true, 00:14:45.522 "data_offset": 2048, 00:14:45.522 "data_size": 63488 00:14:45.522 } 00:14:45.522 ] 00:14:45.522 }' 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.522 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.088 [2024-11-06 09:07:44.884023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.088 [2024-11-06 09:07:44.884068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.088 [2024-11-06 09:07:44.886928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.088 [2024-11-06 09:07:44.886983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.088 [2024-11-06 09:07:44.887019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.088 [2024-11-06 09:07:44.887034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:46.088 { 00:14:46.088 "results": [ 00:14:46.088 { 00:14:46.088 "job": "raid_bdev1", 00:14:46.088 "core_mask": "0x1", 00:14:46.088 "workload": "randrw", 00:14:46.088 "percentage": 50, 00:14:46.088 "status": "finished", 00:14:46.088 "queue_depth": 1, 00:14:46.088 "io_size": 131072, 00:14:46.088 "runtime": 1.359021, 00:14:46.088 "iops": 15293.361912729826, 00:14:46.088 "mibps": 1911.6702390912283, 00:14:46.088 "io_failed": 1, 00:14:46.088 "io_timeout": 0, 00:14:46.088 "avg_latency_us": 90.37561030747962, 00:14:46.088 "min_latency_us": 29.40401606425703, 00:14:46.088 "max_latency_us": 1572.6008032128514 00:14:46.088 } 00:14:46.088 ], 00:14:46.088 "core_count": 1 00:14:46.088 } 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61341 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61341 ']' 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61341 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61341 00:14:46.088 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:46.089 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:46.089 killing process with pid 61341 00:14:46.089 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61341' 00:14:46.089 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61341 00:14:46.089 [2024-11-06 09:07:44.935325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.089 09:07:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61341 00:14:46.089 [2024-11-06 09:07:45.082750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MIlWOXRZXX 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:14:47.463 09:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:47.463 00:14:47.463 real 0m4.571s 00:14:47.464 user 0m5.524s 00:14:47.464 sys 0m0.595s 00:14:47.464 09:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:47.464 ************************************ 00:14:47.464 END TEST raid_write_error_test 00:14:47.464 ************************************ 00:14:47.464 09:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.464 09:07:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:47.464 09:07:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:47.464 09:07:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:47.464 09:07:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.464 09:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.464 ************************************ 00:14:47.464 START TEST raid_state_function_test 00:14:47.464 ************************************ 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61485 00:14:47.464 09:07:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:47.464 Process raid pid: 61485 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61485' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61485 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61485 ']' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:47.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:47.464 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.722 [2024-11-06 09:07:46.557511] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:14:47.722 [2024-11-06 09:07:46.557665] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.722 [2024-11-06 09:07:46.747791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.981 [2024-11-06 09:07:46.887874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.240 [2024-11-06 09:07:47.122291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.240 [2024-11-06 09:07:47.122355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.498 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:48.498 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:48.498 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:48.498 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.498 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.499 [2024-11-06 09:07:47.482554] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.499 [2024-11-06 09:07:47.482619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.499 [2024-11-06 09:07:47.482633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.499 [2024-11-06 09:07:47.482647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.499 09:07:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.499 "name": "Existed_Raid", 00:14:48.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.499 "strip_size_kb": 64, 00:14:48.499 "state": "configuring", 00:14:48.499 
"raid_level": "concat", 00:14:48.499 "superblock": false, 00:14:48.499 "num_base_bdevs": 2, 00:14:48.499 "num_base_bdevs_discovered": 0, 00:14:48.499 "num_base_bdevs_operational": 2, 00:14:48.499 "base_bdevs_list": [ 00:14:48.499 { 00:14:48.499 "name": "BaseBdev1", 00:14:48.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.499 "is_configured": false, 00:14:48.499 "data_offset": 0, 00:14:48.499 "data_size": 0 00:14:48.499 }, 00:14:48.499 { 00:14:48.499 "name": "BaseBdev2", 00:14:48.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.499 "is_configured": false, 00:14:48.499 "data_offset": 0, 00:14:48.499 "data_size": 0 00:14:48.499 } 00:14:48.499 ] 00:14:48.499 }' 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.499 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 [2024-11-06 09:07:47.922504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.066 [2024-11-06 09:07:47.922556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:49.066 [2024-11-06 09:07:47.930508] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.066 [2024-11-06 09:07:47.930565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.066 [2024-11-06 09:07:47.930581] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.066 [2024-11-06 09:07:47.930603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 [2024-11-06 09:07:47.977480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.066 BaseBdev1 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.066 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 [ 00:14:49.066 { 00:14:49.066 "name": "BaseBdev1", 00:14:49.066 "aliases": [ 00:14:49.066 "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7" 00:14:49.066 ], 00:14:49.066 "product_name": "Malloc disk", 00:14:49.066 "block_size": 512, 00:14:49.066 "num_blocks": 65536, 00:14:49.066 "uuid": "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7", 00:14:49.066 "assigned_rate_limits": { 00:14:49.066 "rw_ios_per_sec": 0, 00:14:49.066 "rw_mbytes_per_sec": 0, 00:14:49.066 "r_mbytes_per_sec": 0, 00:14:49.066 "w_mbytes_per_sec": 0 00:14:49.066 }, 00:14:49.066 "claimed": true, 00:14:49.066 "claim_type": "exclusive_write", 00:14:49.066 "zoned": false, 00:14:49.066 "supported_io_types": { 00:14:49.066 "read": true, 00:14:49.066 "write": true, 00:14:49.066 "unmap": true, 00:14:49.066 "flush": true, 00:14:49.066 "reset": true, 00:14:49.066 "nvme_admin": false, 00:14:49.066 "nvme_io": false, 00:14:49.066 "nvme_io_md": false, 00:14:49.066 "write_zeroes": true, 00:14:49.066 "zcopy": true, 00:14:49.066 "get_zone_info": false, 00:14:49.066 "zone_management": false, 00:14:49.066 "zone_append": false, 00:14:49.066 "compare": false, 00:14:49.066 "compare_and_write": false, 00:14:49.066 "abort": true, 00:14:49.066 "seek_hole": false, 00:14:49.066 "seek_data": false, 00:14:49.066 "copy": true, 00:14:49.066 "nvme_iov_md": 
false 00:14:49.066 }, 00:14:49.066 "memory_domains": [ 00:14:49.066 { 00:14:49.067 "dma_device_id": "system", 00:14:49.067 "dma_device_type": 1 00:14:49.067 }, 00:14:49.067 { 00:14:49.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.067 "dma_device_type": 2 00:14:49.067 } 00:14:49.067 ], 00:14:49.067 "driver_specific": {} 00:14:49.067 } 00:14:49.067 ] 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.067 
09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.067 "name": "Existed_Raid", 00:14:49.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.067 "strip_size_kb": 64, 00:14:49.067 "state": "configuring", 00:14:49.067 "raid_level": "concat", 00:14:49.067 "superblock": false, 00:14:49.067 "num_base_bdevs": 2, 00:14:49.067 "num_base_bdevs_discovered": 1, 00:14:49.067 "num_base_bdevs_operational": 2, 00:14:49.067 "base_bdevs_list": [ 00:14:49.067 { 00:14:49.067 "name": "BaseBdev1", 00:14:49.067 "uuid": "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7", 00:14:49.067 "is_configured": true, 00:14:49.067 "data_offset": 0, 00:14:49.067 "data_size": 65536 00:14:49.067 }, 00:14:49.067 { 00:14:49.067 "name": "BaseBdev2", 00:14:49.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.067 "is_configured": false, 00:14:49.067 "data_offset": 0, 00:14:49.067 "data_size": 0 00:14:49.067 } 00:14:49.067 ] 00:14:49.067 }' 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.067 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.633 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.633 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.633 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.633 [2024-11-06 09:07:48.397076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.633 [2024-11-06 09:07:48.397140] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:49.633 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.634 [2024-11-06 09:07:48.405144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.634 [2024-11-06 09:07:48.407415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.634 [2024-11-06 09:07:48.407459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.634 "name": "Existed_Raid", 00:14:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.634 "strip_size_kb": 64, 00:14:49.634 "state": "configuring", 00:14:49.634 "raid_level": "concat", 00:14:49.634 "superblock": false, 00:14:49.634 "num_base_bdevs": 2, 00:14:49.634 "num_base_bdevs_discovered": 1, 00:14:49.634 "num_base_bdevs_operational": 2, 00:14:49.634 "base_bdevs_list": [ 00:14:49.634 { 00:14:49.634 "name": "BaseBdev1", 00:14:49.634 "uuid": "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7", 00:14:49.634 "is_configured": true, 00:14:49.634 "data_offset": 0, 00:14:49.634 "data_size": 65536 00:14:49.634 }, 00:14:49.634 { 00:14:49.634 "name": "BaseBdev2", 00:14:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.634 "is_configured": false, 00:14:49.634 "data_offset": 0, 00:14:49.634 "data_size": 0 00:14:49.634 } 
00:14:49.634 ] 00:14:49.634 }' 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.634 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.892 [2024-11-06 09:07:48.853805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.892 [2024-11-06 09:07:48.853864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.892 [2024-11-06 09:07:48.853875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:49.892 [2024-11-06 09:07:48.854181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:49.892 [2024-11-06 09:07:48.854378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.892 [2024-11-06 09:07:48.854403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:49.892 [2024-11-06 09:07:48.854677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.892 BaseBdev2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:49.892 09:07:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.892 [ 00:14:49.892 { 00:14:49.892 "name": "BaseBdev2", 00:14:49.892 "aliases": [ 00:14:49.892 "e1cadb6e-4b71-4d09-b778-daede3a55f8e" 00:14:49.892 ], 00:14:49.892 "product_name": "Malloc disk", 00:14:49.892 "block_size": 512, 00:14:49.892 "num_blocks": 65536, 00:14:49.892 "uuid": "e1cadb6e-4b71-4d09-b778-daede3a55f8e", 00:14:49.892 "assigned_rate_limits": { 00:14:49.892 "rw_ios_per_sec": 0, 00:14:49.892 "rw_mbytes_per_sec": 0, 00:14:49.892 "r_mbytes_per_sec": 0, 00:14:49.892 "w_mbytes_per_sec": 0 00:14:49.892 }, 00:14:49.892 "claimed": true, 00:14:49.892 "claim_type": "exclusive_write", 00:14:49.892 "zoned": false, 00:14:49.892 "supported_io_types": { 00:14:49.892 "read": true, 00:14:49.892 "write": true, 00:14:49.892 "unmap": true, 00:14:49.892 "flush": true, 00:14:49.892 "reset": true, 00:14:49.892 "nvme_admin": false, 00:14:49.892 "nvme_io": false, 00:14:49.892 "nvme_io_md": 
false, 00:14:49.892 "write_zeroes": true, 00:14:49.892 "zcopy": true, 00:14:49.892 "get_zone_info": false, 00:14:49.892 "zone_management": false, 00:14:49.892 "zone_append": false, 00:14:49.892 "compare": false, 00:14:49.892 "compare_and_write": false, 00:14:49.892 "abort": true, 00:14:49.892 "seek_hole": false, 00:14:49.892 "seek_data": false, 00:14:49.892 "copy": true, 00:14:49.892 "nvme_iov_md": false 00:14:49.892 }, 00:14:49.892 "memory_domains": [ 00:14:49.892 { 00:14:49.892 "dma_device_id": "system", 00:14:49.892 "dma_device_type": 1 00:14:49.892 }, 00:14:49.892 { 00:14:49.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.892 "dma_device_type": 2 00:14:49.892 } 00:14:49.892 ], 00:14:49.892 "driver_specific": {} 00:14:49.892 } 00:14:49.892 ] 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.892 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.893 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.151 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.151 "name": "Existed_Raid", 00:14:50.151 "uuid": "fae208e3-bf70-4a1a-802d-d7f25d2accde", 00:14:50.151 "strip_size_kb": 64, 00:14:50.151 "state": "online", 00:14:50.151 "raid_level": "concat", 00:14:50.151 "superblock": false, 00:14:50.151 "num_base_bdevs": 2, 00:14:50.151 "num_base_bdevs_discovered": 2, 00:14:50.151 "num_base_bdevs_operational": 2, 00:14:50.151 "base_bdevs_list": [ 00:14:50.151 { 00:14:50.151 "name": "BaseBdev1", 00:14:50.151 "uuid": "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7", 00:14:50.151 "is_configured": true, 00:14:50.151 "data_offset": 0, 00:14:50.151 "data_size": 65536 00:14:50.151 }, 00:14:50.151 { 00:14:50.151 "name": "BaseBdev2", 00:14:50.151 "uuid": "e1cadb6e-4b71-4d09-b778-daede3a55f8e", 00:14:50.151 "is_configured": true, 00:14:50.151 "data_offset": 0, 00:14:50.151 "data_size": 65536 00:14:50.151 } 00:14:50.151 ] 00:14:50.151 }' 00:14:50.151 09:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:50.151 09:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.409 [2024-11-06 09:07:49.338008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.409 "name": "Existed_Raid", 00:14:50.409 "aliases": [ 00:14:50.409 "fae208e3-bf70-4a1a-802d-d7f25d2accde" 00:14:50.409 ], 00:14:50.409 "product_name": "Raid Volume", 00:14:50.409 "block_size": 512, 00:14:50.409 "num_blocks": 131072, 00:14:50.409 "uuid": "fae208e3-bf70-4a1a-802d-d7f25d2accde", 00:14:50.409 "assigned_rate_limits": { 00:14:50.409 "rw_ios_per_sec": 0, 00:14:50.409 "rw_mbytes_per_sec": 0, 00:14:50.409 "r_mbytes_per_sec": 
0, 00:14:50.409 "w_mbytes_per_sec": 0 00:14:50.409 }, 00:14:50.409 "claimed": false, 00:14:50.409 "zoned": false, 00:14:50.409 "supported_io_types": { 00:14:50.409 "read": true, 00:14:50.409 "write": true, 00:14:50.409 "unmap": true, 00:14:50.409 "flush": true, 00:14:50.409 "reset": true, 00:14:50.409 "nvme_admin": false, 00:14:50.409 "nvme_io": false, 00:14:50.409 "nvme_io_md": false, 00:14:50.409 "write_zeroes": true, 00:14:50.409 "zcopy": false, 00:14:50.409 "get_zone_info": false, 00:14:50.409 "zone_management": false, 00:14:50.409 "zone_append": false, 00:14:50.409 "compare": false, 00:14:50.409 "compare_and_write": false, 00:14:50.409 "abort": false, 00:14:50.409 "seek_hole": false, 00:14:50.409 "seek_data": false, 00:14:50.409 "copy": false, 00:14:50.409 "nvme_iov_md": false 00:14:50.409 }, 00:14:50.409 "memory_domains": [ 00:14:50.409 { 00:14:50.409 "dma_device_id": "system", 00:14:50.409 "dma_device_type": 1 00:14:50.409 }, 00:14:50.409 { 00:14:50.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.409 "dma_device_type": 2 00:14:50.409 }, 00:14:50.409 { 00:14:50.409 "dma_device_id": "system", 00:14:50.409 "dma_device_type": 1 00:14:50.409 }, 00:14:50.409 { 00:14:50.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.409 "dma_device_type": 2 00:14:50.409 } 00:14:50.409 ], 00:14:50.409 "driver_specific": { 00:14:50.409 "raid": { 00:14:50.409 "uuid": "fae208e3-bf70-4a1a-802d-d7f25d2accde", 00:14:50.409 "strip_size_kb": 64, 00:14:50.409 "state": "online", 00:14:50.409 "raid_level": "concat", 00:14:50.409 "superblock": false, 00:14:50.409 "num_base_bdevs": 2, 00:14:50.409 "num_base_bdevs_discovered": 2, 00:14:50.409 "num_base_bdevs_operational": 2, 00:14:50.409 "base_bdevs_list": [ 00:14:50.409 { 00:14:50.409 "name": "BaseBdev1", 00:14:50.409 "uuid": "ff07ad4d-9c72-4ea9-ace1-9b8b42b7aff7", 00:14:50.409 "is_configured": true, 00:14:50.409 "data_offset": 0, 00:14:50.409 "data_size": 65536 00:14:50.409 }, 00:14:50.409 { 00:14:50.409 "name": "BaseBdev2", 
00:14:50.409 "uuid": "e1cadb6e-4b71-4d09-b778-daede3a55f8e", 00:14:50.409 "is_configured": true, 00:14:50.409 "data_offset": 0, 00:14:50.409 "data_size": 65536 00:14:50.409 } 00:14:50.409 ] 00:14:50.409 } 00:14:50.409 } 00:14:50.409 }' 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.409 BaseBdev2' 00:14:50.409 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.671 [2024-11-06 09:07:49.585766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.671 [2024-11-06 09:07:49.585804] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.671 [2024-11-06 09:07:49.585857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.671 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.672 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.930 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.930 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.930 "name": "Existed_Raid", 00:14:50.930 "uuid": "fae208e3-bf70-4a1a-802d-d7f25d2accde", 00:14:50.930 "strip_size_kb": 64, 00:14:50.930 
"state": "offline", 00:14:50.930 "raid_level": "concat", 00:14:50.930 "superblock": false, 00:14:50.930 "num_base_bdevs": 2, 00:14:50.930 "num_base_bdevs_discovered": 1, 00:14:50.930 "num_base_bdevs_operational": 1, 00:14:50.930 "base_bdevs_list": [ 00:14:50.930 { 00:14:50.930 "name": null, 00:14:50.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.930 "is_configured": false, 00:14:50.930 "data_offset": 0, 00:14:50.930 "data_size": 65536 00:14:50.930 }, 00:14:50.930 { 00:14:50.930 "name": "BaseBdev2", 00:14:50.930 "uuid": "e1cadb6e-4b71-4d09-b778-daede3a55f8e", 00:14:50.930 "is_configured": true, 00:14:50.930 "data_offset": 0, 00:14:50.930 "data_size": 65536 00:14:50.930 } 00:14:50.930 ] 00:14:50.930 }' 00:14:50.930 09:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.930 09:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.188 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:51.188 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.188 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.189 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.189 [2024-11-06 09:07:50.173448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.189 [2024-11-06 09:07:50.173508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61485 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61485 ']' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61485 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61485 00:14:51.446 killing process with pid 61485 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61485' 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61485 00:14:51.446 [2024-11-06 09:07:50.386047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.446 09:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61485 00:14:51.446 [2024-11-06 09:07:50.404092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:52.867 00:14:52.867 real 0m5.169s 00:14:52.867 user 0m7.410s 00:14:52.867 sys 0m0.926s 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.867 ************************************ 00:14:52.867 END TEST raid_state_function_test 00:14:52.867 ************************************ 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.867 09:07:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:52.867 09:07:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:14:52.867 09:07:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.867 09:07:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.867 ************************************ 00:14:52.867 START TEST raid_state_function_test_sb 00:14:52.867 ************************************ 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:52.867 Process raid pid: 61738 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61738 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61738' 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61738 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61738 ']' 00:14:52.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.867 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.867 [2024-11-06 09:07:51.796064] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:14:52.867 [2024-11-06 09:07:51.796429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.125 [2024-11-06 09:07:51.980357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.125 [2024-11-06 09:07:52.109537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.383 [2024-11-06 09:07:52.335210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.383 [2024-11-06 09:07:52.335472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.641 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.641 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:53.641 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:53.641 09:07:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.641 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.641 [2024-11-06 09:07:52.674768] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.641 [2024-11-06 09:07:52.674964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.641 [2024-11-06 09:07:52.674988] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.641 [2024-11-06 09:07:52.675002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.899 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.900 
09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.900 "name": "Existed_Raid", 00:14:53.900 "uuid": "c287e8d1-d24d-4d99-a14c-f9a41e8ee542", 00:14:53.900 "strip_size_kb": 64, 00:14:53.900 "state": "configuring", 00:14:53.900 "raid_level": "concat", 00:14:53.900 "superblock": true, 00:14:53.900 "num_base_bdevs": 2, 00:14:53.900 "num_base_bdevs_discovered": 0, 00:14:53.900 "num_base_bdevs_operational": 2, 00:14:53.900 "base_bdevs_list": [ 00:14:53.900 { 00:14:53.900 "name": "BaseBdev1", 00:14:53.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.900 "is_configured": false, 00:14:53.900 "data_offset": 0, 00:14:53.900 "data_size": 0 00:14:53.900 }, 00:14:53.900 { 00:14:53.900 "name": "BaseBdev2", 00:14:53.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.900 "is_configured": false, 00:14:53.900 "data_offset": 0, 00:14:53.900 "data_size": 0 00:14:53.900 } 00:14:53.900 ] 00:14:53.900 }' 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.900 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.171 [2024-11-06 09:07:53.098159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.171 [2024-11-06 09:07:53.098199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.171 [2024-11-06 09:07:53.106157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.171 [2024-11-06 09:07:53.106210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.171 [2024-11-06 09:07:53.106224] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.171 [2024-11-06 09:07:53.106244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.171 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.172 [2024-11-06 09:07:53.153126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:14:54.172 BaseBdev1 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.172 [ 00:14:54.172 { 00:14:54.172 "name": "BaseBdev1", 00:14:54.172 "aliases": [ 00:14:54.172 "06290bd7-81b9-4dee-8de4-3d10777a5085" 00:14:54.172 ], 00:14:54.172 "product_name": "Malloc disk", 00:14:54.172 "block_size": 512, 00:14:54.172 "num_blocks": 65536, 00:14:54.172 "uuid": "06290bd7-81b9-4dee-8de4-3d10777a5085", 00:14:54.172 
"assigned_rate_limits": { 00:14:54.172 "rw_ios_per_sec": 0, 00:14:54.172 "rw_mbytes_per_sec": 0, 00:14:54.172 "r_mbytes_per_sec": 0, 00:14:54.172 "w_mbytes_per_sec": 0 00:14:54.172 }, 00:14:54.172 "claimed": true, 00:14:54.172 "claim_type": "exclusive_write", 00:14:54.172 "zoned": false, 00:14:54.172 "supported_io_types": { 00:14:54.172 "read": true, 00:14:54.172 "write": true, 00:14:54.172 "unmap": true, 00:14:54.172 "flush": true, 00:14:54.172 "reset": true, 00:14:54.172 "nvme_admin": false, 00:14:54.172 "nvme_io": false, 00:14:54.172 "nvme_io_md": false, 00:14:54.172 "write_zeroes": true, 00:14:54.172 "zcopy": true, 00:14:54.172 "get_zone_info": false, 00:14:54.172 "zone_management": false, 00:14:54.172 "zone_append": false, 00:14:54.172 "compare": false, 00:14:54.172 "compare_and_write": false, 00:14:54.172 "abort": true, 00:14:54.172 "seek_hole": false, 00:14:54.172 "seek_data": false, 00:14:54.172 "copy": true, 00:14:54.172 "nvme_iov_md": false 00:14:54.172 }, 00:14:54.172 "memory_domains": [ 00:14:54.172 { 00:14:54.172 "dma_device_id": "system", 00:14:54.172 "dma_device_type": 1 00:14:54.172 }, 00:14:54.172 { 00:14:54.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.172 "dma_device_type": 2 00:14:54.172 } 00:14:54.172 ], 00:14:54.172 "driver_specific": {} 00:14:54.172 } 00:14:54.172 ] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.172 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.430 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.430 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.430 "name": "Existed_Raid", 00:14:54.430 "uuid": "73d61b2f-0910-4191-90e6-9ae178ebb8b8", 00:14:54.430 "strip_size_kb": 64, 00:14:54.430 "state": "configuring", 00:14:54.430 "raid_level": "concat", 00:14:54.430 "superblock": true, 00:14:54.430 "num_base_bdevs": 2, 00:14:54.430 "num_base_bdevs_discovered": 1, 00:14:54.430 "num_base_bdevs_operational": 2, 00:14:54.430 "base_bdevs_list": [ 00:14:54.430 { 00:14:54.430 "name": "BaseBdev1", 00:14:54.430 "uuid": "06290bd7-81b9-4dee-8de4-3d10777a5085", 00:14:54.430 "is_configured": true, 00:14:54.430 "data_offset": 
2048, 00:14:54.430 "data_size": 63488 00:14:54.430 }, 00:14:54.430 { 00:14:54.430 "name": "BaseBdev2", 00:14:54.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.430 "is_configured": false, 00:14:54.430 "data_offset": 0, 00:14:54.430 "data_size": 0 00:14:54.430 } 00:14:54.430 ] 00:14:54.430 }' 00:14:54.430 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.430 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 [2024-11-06 09:07:53.592558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.688 [2024-11-06 09:07:53.592724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 [2024-11-06 09:07:53.600629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.688 [2024-11-06 09:07:53.602916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.688 [2024-11-06 09:07:53.602960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.688 "name": "Existed_Raid", 00:14:54.688 "uuid": "43d9a870-4023-4c4d-a0eb-e2643d327b54", 00:14:54.688 "strip_size_kb": 64, 00:14:54.688 "state": "configuring", 00:14:54.688 "raid_level": "concat", 00:14:54.688 "superblock": true, 00:14:54.688 "num_base_bdevs": 2, 00:14:54.688 "num_base_bdevs_discovered": 1, 00:14:54.688 "num_base_bdevs_operational": 2, 00:14:54.688 "base_bdevs_list": [ 00:14:54.688 { 00:14:54.688 "name": "BaseBdev1", 00:14:54.688 "uuid": "06290bd7-81b9-4dee-8de4-3d10777a5085", 00:14:54.688 "is_configured": true, 00:14:54.688 "data_offset": 2048, 00:14:54.688 "data_size": 63488 00:14:54.688 }, 00:14:54.688 { 00:14:54.688 "name": "BaseBdev2", 00:14:54.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.688 "is_configured": false, 00:14:54.688 "data_offset": 0, 00:14:54.688 "data_size": 0 00:14:54.688 } 00:14:54.688 ] 00:14:54.688 }' 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.688 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.255 [2024-11-06 09:07:54.058521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.255 [2024-11-06 09:07:54.058758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:55.255 [2024-11-06 09:07:54.058776] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:55.255 BaseBdev2 00:14:55.255 [2024-11-06 09:07:54.059084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:55.255 [2024-11-06 09:07:54.059305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:55.255 [2024-11-06 09:07:54.059336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:55.255 [2024-11-06 09:07:54.059513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.255 [ 00:14:55.255 { 00:14:55.255 "name": "BaseBdev2", 00:14:55.255 "aliases": [ 00:14:55.255 "e7faa79e-4cea-4648-b01e-5747de1fdf4d" 00:14:55.255 ], 00:14:55.255 "product_name": "Malloc disk", 00:14:55.255 "block_size": 512, 00:14:55.255 "num_blocks": 65536, 00:14:55.255 "uuid": "e7faa79e-4cea-4648-b01e-5747de1fdf4d", 00:14:55.255 "assigned_rate_limits": { 00:14:55.255 "rw_ios_per_sec": 0, 00:14:55.255 "rw_mbytes_per_sec": 0, 00:14:55.255 "r_mbytes_per_sec": 0, 00:14:55.255 "w_mbytes_per_sec": 0 00:14:55.255 }, 00:14:55.255 "claimed": true, 00:14:55.255 "claim_type": "exclusive_write", 00:14:55.255 "zoned": false, 00:14:55.255 "supported_io_types": { 00:14:55.255 "read": true, 00:14:55.255 "write": true, 00:14:55.255 "unmap": true, 00:14:55.255 "flush": true, 00:14:55.255 "reset": true, 00:14:55.255 "nvme_admin": false, 00:14:55.255 "nvme_io": false, 00:14:55.255 "nvme_io_md": false, 00:14:55.255 "write_zeroes": true, 00:14:55.255 "zcopy": true, 00:14:55.255 "get_zone_info": false, 00:14:55.255 "zone_management": false, 00:14:55.255 "zone_append": false, 00:14:55.255 "compare": false, 00:14:55.255 "compare_and_write": false, 00:14:55.255 "abort": true, 00:14:55.255 "seek_hole": false, 00:14:55.255 "seek_data": false, 00:14:55.255 "copy": true, 00:14:55.255 "nvme_iov_md": false 00:14:55.255 }, 00:14:55.255 "memory_domains": [ 00:14:55.255 { 00:14:55.255 "dma_device_id": "system", 00:14:55.255 "dma_device_type": 1 00:14:55.255 }, 00:14:55.255 { 00:14:55.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.255 "dma_device_type": 2 00:14:55.255 } 00:14:55.255 ], 00:14:55.255 "driver_specific": {} 00:14:55.255 } 00:14:55.255 ] 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.255 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.256 "name": "Existed_Raid", 00:14:55.256 "uuid": "43d9a870-4023-4c4d-a0eb-e2643d327b54", 00:14:55.256 "strip_size_kb": 64, 00:14:55.256 "state": "online", 00:14:55.256 "raid_level": "concat", 00:14:55.256 "superblock": true, 00:14:55.256 "num_base_bdevs": 2, 00:14:55.256 "num_base_bdevs_discovered": 2, 00:14:55.256 "num_base_bdevs_operational": 2, 00:14:55.256 "base_bdevs_list": [ 00:14:55.256 { 00:14:55.256 "name": "BaseBdev1", 00:14:55.256 "uuid": "06290bd7-81b9-4dee-8de4-3d10777a5085", 00:14:55.256 "is_configured": true, 00:14:55.256 "data_offset": 2048, 00:14:55.256 "data_size": 63488 00:14:55.256 }, 00:14:55.256 { 00:14:55.256 "name": "BaseBdev2", 00:14:55.256 "uuid": "e7faa79e-4cea-4648-b01e-5747de1fdf4d", 00:14:55.256 "is_configured": true, 00:14:55.256 "data_offset": 2048, 00:14:55.256 "data_size": 63488 00:14:55.256 } 00:14:55.256 ] 00:14:55.256 }' 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.256 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.514 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.773 [2024-11-06 09:07:54.554471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.773 "name": "Existed_Raid", 00:14:55.773 "aliases": [ 00:14:55.773 "43d9a870-4023-4c4d-a0eb-e2643d327b54" 00:14:55.773 ], 00:14:55.773 "product_name": "Raid Volume", 00:14:55.773 "block_size": 512, 00:14:55.773 "num_blocks": 126976, 00:14:55.773 "uuid": "43d9a870-4023-4c4d-a0eb-e2643d327b54", 00:14:55.773 "assigned_rate_limits": { 00:14:55.773 "rw_ios_per_sec": 0, 00:14:55.773 "rw_mbytes_per_sec": 0, 00:14:55.773 "r_mbytes_per_sec": 0, 00:14:55.773 "w_mbytes_per_sec": 0 00:14:55.773 }, 00:14:55.773 "claimed": false, 00:14:55.773 "zoned": false, 00:14:55.773 "supported_io_types": { 00:14:55.773 "read": true, 00:14:55.773 "write": true, 00:14:55.773 "unmap": true, 00:14:55.773 "flush": true, 00:14:55.773 "reset": true, 00:14:55.773 "nvme_admin": false, 00:14:55.773 "nvme_io": false, 00:14:55.773 "nvme_io_md": false, 00:14:55.773 "write_zeroes": true, 00:14:55.773 "zcopy": false, 00:14:55.773 "get_zone_info": false, 00:14:55.773 "zone_management": false, 00:14:55.773 "zone_append": false, 00:14:55.773 "compare": false, 00:14:55.773 "compare_and_write": false, 00:14:55.773 "abort": false, 00:14:55.773 "seek_hole": false, 
00:14:55.773 "seek_data": false, 00:14:55.773 "copy": false, 00:14:55.773 "nvme_iov_md": false 00:14:55.773 }, 00:14:55.773 "memory_domains": [ 00:14:55.773 { 00:14:55.773 "dma_device_id": "system", 00:14:55.773 "dma_device_type": 1 00:14:55.773 }, 00:14:55.773 { 00:14:55.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.773 "dma_device_type": 2 00:14:55.773 }, 00:14:55.773 { 00:14:55.773 "dma_device_id": "system", 00:14:55.773 "dma_device_type": 1 00:14:55.773 }, 00:14:55.773 { 00:14:55.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.773 "dma_device_type": 2 00:14:55.773 } 00:14:55.773 ], 00:14:55.773 "driver_specific": { 00:14:55.773 "raid": { 00:14:55.773 "uuid": "43d9a870-4023-4c4d-a0eb-e2643d327b54", 00:14:55.773 "strip_size_kb": 64, 00:14:55.773 "state": "online", 00:14:55.773 "raid_level": "concat", 00:14:55.773 "superblock": true, 00:14:55.773 "num_base_bdevs": 2, 00:14:55.773 "num_base_bdevs_discovered": 2, 00:14:55.773 "num_base_bdevs_operational": 2, 00:14:55.773 "base_bdevs_list": [ 00:14:55.773 { 00:14:55.773 "name": "BaseBdev1", 00:14:55.773 "uuid": "06290bd7-81b9-4dee-8de4-3d10777a5085", 00:14:55.773 "is_configured": true, 00:14:55.773 "data_offset": 2048, 00:14:55.773 "data_size": 63488 00:14:55.773 }, 00:14:55.773 { 00:14:55.773 "name": "BaseBdev2", 00:14:55.773 "uuid": "e7faa79e-4cea-4648-b01e-5747de1fdf4d", 00:14:55.773 "is_configured": true, 00:14:55.773 "data_offset": 2048, 00:14:55.773 "data_size": 63488 00:14:55.773 } 00:14:55.773 ] 00:14:55.773 } 00:14:55.773 } 00:14:55.773 }' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:55.773 BaseBdev2' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.773 09:07:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.773 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.773 [2024-11-06 09:07:54.725998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.773 [2024-11-06 09:07:54.726038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.773 [2024-11-06 09:07:54.726098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.032 09:07:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.032 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.032 "name": "Existed_Raid", 00:14:56.032 "uuid": "43d9a870-4023-4c4d-a0eb-e2643d327b54", 00:14:56.032 "strip_size_kb": 64, 00:14:56.032 "state": "offline", 00:14:56.032 "raid_level": "concat", 00:14:56.032 "superblock": true, 00:14:56.032 "num_base_bdevs": 2, 00:14:56.032 "num_base_bdevs_discovered": 1, 00:14:56.032 "num_base_bdevs_operational": 1, 00:14:56.032 "base_bdevs_list": [ 00:14:56.032 { 00:14:56.032 "name": null, 00:14:56.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.032 "is_configured": false, 00:14:56.032 "data_offset": 0, 00:14:56.032 "data_size": 63488 00:14:56.032 }, 00:14:56.032 { 00:14:56.032 "name": 
"BaseBdev2", 00:14:56.032 "uuid": "e7faa79e-4cea-4648-b01e-5747de1fdf4d", 00:14:56.032 "is_configured": true, 00:14:56.032 "data_offset": 2048, 00:14:56.032 "data_size": 63488 00:14:56.032 } 00:14:56.032 ] 00:14:56.032 }' 00:14:56.033 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.033 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.291 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 [2024-11-06 09:07:55.240513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.291 [2024-11-06 09:07:55.240576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:56.550 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61738 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61738 ']' 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61738 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61738 00:14:56.551 09:07:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:56.551 killing process with pid 61738 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61738' 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61738 00:14:56.551 [2024-11-06 09:07:55.422982] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.551 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61738 00:14:56.551 [2024-11-06 09:07:55.440232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.926 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:57.926 00:14:57.926 real 0m4.867s 00:14:57.926 user 0m6.941s 00:14:57.926 sys 0m0.860s 00:14:57.926 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:57.926 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.926 ************************************ 00:14:57.926 END TEST raid_state_function_test_sb 00:14:57.926 ************************************ 00:14:57.926 09:07:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:57.926 09:07:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:57.926 09:07:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:57.926 09:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.926 ************************************ 00:14:57.926 START TEST raid_superblock_test 00:14:57.926 ************************************ 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # 
raid_superblock_test concat 2 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:57.926 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61990 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61990 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- 
# /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61990 ']' 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:57.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:57.927 09:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.927 [2024-11-06 09:07:56.722006] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:14:57.927 [2024-11-06 09:07:56.722140] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61990 ] 00:14:57.927 [2024-11-06 09:07:56.903607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.184 [2024-11-06 09:07:57.025166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.442 [2024-11-06 09:07:57.232225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.442 [2024-11-06 09:07:57.232307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:58.701 
09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 malloc1 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 [2024-11-06 09:07:57.646337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:58.701 [2024-11-06 09:07:57.646414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.701 [2024-11-06 09:07:57.646444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.701 [2024-11-06 09:07:57.646458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.701 [2024-11-06 09:07:57.649048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.701 [2024-11-06 09:07:57.649094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:58.701 pt1 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 malloc2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 [2024-11-06 09:07:57.705050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.701 [2024-11-06 09:07:57.705123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.701 [2024-11-06 09:07:57.705152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:58.701 [2024-11-06 09:07:57.705165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.701 [2024-11-06 09:07:57.707752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.701 [2024-11-06 09:07:57.707796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.701 
pt2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 [2024-11-06 09:07:57.717108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:58.701 [2024-11-06 09:07:57.719369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.701 [2024-11-06 09:07:57.719539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:58.701 [2024-11-06 09:07:57.719554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.701 [2024-11-06 09:07:57.719834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:58.701 [2024-11-06 09:07:57.719990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:58.701 [2024-11-06 09:07:57.720006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:58.701 [2024-11-06 09:07:57.720164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.701 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.702 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.702 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.702 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.702 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.961 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.961 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.961 "name": "raid_bdev1", 00:14:58.961 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:14:58.961 "strip_size_kb": 64, 00:14:58.961 "state": "online", 00:14:58.961 "raid_level": "concat", 00:14:58.961 "superblock": true, 00:14:58.961 "num_base_bdevs": 2, 00:14:58.961 "num_base_bdevs_discovered": 2, 00:14:58.961 "num_base_bdevs_operational": 2, 00:14:58.961 "base_bdevs_list": [ 00:14:58.961 { 00:14:58.961 "name": "pt1", 
00:14:58.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.961 "is_configured": true, 00:14:58.961 "data_offset": 2048, 00:14:58.961 "data_size": 63488 00:14:58.961 }, 00:14:58.961 { 00:14:58.961 "name": "pt2", 00:14:58.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.961 "is_configured": true, 00:14:58.961 "data_offset": 2048, 00:14:58.961 "data_size": 63488 00:14:58.961 } 00:14:58.961 ] 00:14:58.961 }' 00:14:58.961 09:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.961 09:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.219 [2024-11-06 09:07:58.148753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.219 09:07:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.219 "name": "raid_bdev1", 00:14:59.219 "aliases": [ 00:14:59.219 "925935ab-12cc-4c8a-aec6-b69e28edd887" 00:14:59.219 ], 00:14:59.219 "product_name": "Raid Volume", 00:14:59.219 "block_size": 512, 00:14:59.219 "num_blocks": 126976, 00:14:59.219 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:14:59.219 "assigned_rate_limits": { 00:14:59.219 "rw_ios_per_sec": 0, 00:14:59.219 "rw_mbytes_per_sec": 0, 00:14:59.219 "r_mbytes_per_sec": 0, 00:14:59.219 "w_mbytes_per_sec": 0 00:14:59.219 }, 00:14:59.219 "claimed": false, 00:14:59.219 "zoned": false, 00:14:59.219 "supported_io_types": { 00:14:59.219 "read": true, 00:14:59.219 "write": true, 00:14:59.219 "unmap": true, 00:14:59.219 "flush": true, 00:14:59.219 "reset": true, 00:14:59.219 "nvme_admin": false, 00:14:59.219 "nvme_io": false, 00:14:59.219 "nvme_io_md": false, 00:14:59.219 "write_zeroes": true, 00:14:59.219 "zcopy": false, 00:14:59.219 "get_zone_info": false, 00:14:59.219 "zone_management": false, 00:14:59.219 "zone_append": false, 00:14:59.219 "compare": false, 00:14:59.219 "compare_and_write": false, 00:14:59.219 "abort": false, 00:14:59.219 "seek_hole": false, 00:14:59.219 "seek_data": false, 00:14:59.219 "copy": false, 00:14:59.219 "nvme_iov_md": false 00:14:59.219 }, 00:14:59.219 "memory_domains": [ 00:14:59.219 { 00:14:59.219 "dma_device_id": "system", 00:14:59.219 "dma_device_type": 1 00:14:59.219 }, 00:14:59.219 { 00:14:59.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.219 "dma_device_type": 2 00:14:59.219 }, 00:14:59.220 { 00:14:59.220 "dma_device_id": "system", 00:14:59.220 "dma_device_type": 1 00:14:59.220 }, 00:14:59.220 { 00:14:59.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.220 "dma_device_type": 2 00:14:59.220 } 00:14:59.220 ], 00:14:59.220 "driver_specific": { 00:14:59.220 "raid": { 00:14:59.220 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:14:59.220 "strip_size_kb": 64, 00:14:59.220 "state": "online", 00:14:59.220 
"raid_level": "concat", 00:14:59.220 "superblock": true, 00:14:59.220 "num_base_bdevs": 2, 00:14:59.220 "num_base_bdevs_discovered": 2, 00:14:59.220 "num_base_bdevs_operational": 2, 00:14:59.220 "base_bdevs_list": [ 00:14:59.220 { 00:14:59.220 "name": "pt1", 00:14:59.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.220 "is_configured": true, 00:14:59.220 "data_offset": 2048, 00:14:59.220 "data_size": 63488 00:14:59.220 }, 00:14:59.220 { 00:14:59.220 "name": "pt2", 00:14:59.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.220 "is_configured": true, 00:14:59.220 "data_offset": 2048, 00:14:59.220 "data_size": 63488 00:14:59.220 } 00:14:59.220 ] 00:14:59.220 } 00:14:59.220 } 00:14:59.220 }' 00:14:59.220 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.220 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:59.220 pt2' 00:14:59.220 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:59.498 [2024-11-06 09:07:58.356724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=925935ab-12cc-4c8a-aec6-b69e28edd887 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
925935ab-12cc-4c8a-aec6-b69e28edd887 ']' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 [2024-11-06 09:07:58.384388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.498 [2024-11-06 09:07:58.384419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.498 [2024-11-06 09:07:58.384512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.498 [2024-11-06 09:07:58.384563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.498 [2024-11-06 09:07:58.384578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.498 09:07:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 [2024-11-06 09:07:58.520444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:59.498 [2024-11-06 09:07:58.522732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:59.498 [2024-11-06 09:07:58.522815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:59.498 [2024-11-06 09:07:58.522876] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:59.498 [2024-11-06 09:07:58.522895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.498 [2024-11-06 09:07:58.522909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:59.498 request: 00:14:59.498 { 00:14:59.498 "name": "raid_bdev1", 00:14:59.498 "raid_level": "concat", 00:14:59.498 "base_bdevs": [ 00:14:59.498 "malloc1", 00:14:59.498 "malloc2" 00:14:59.498 ], 00:14:59.498 "strip_size_kb": 64, 
00:14:59.498 "superblock": false, 00:14:59.498 "method": "bdev_raid_create", 00:14:59.498 "req_id": 1 00:14:59.498 } 00:14:59.498 Got JSON-RPC error response 00:14:59.498 response: 00:14:59.498 { 00:14:59.498 "code": -17, 00:14:59.498 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:59.498 } 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.498 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.757 [2024-11-06 09:07:58.588426] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:14:59.757 [2024-11-06 09:07:58.588501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.757 [2024-11-06 09:07:58.588527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.757 [2024-11-06 09:07:58.588543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.757 [2024-11-06 09:07:58.591170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.757 [2024-11-06 09:07:58.591218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.757 [2024-11-06 09:07:58.591325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:59.757 [2024-11-06 09:07:58.591394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.757 pt1 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.757 "name": "raid_bdev1", 00:14:59.757 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:14:59.757 "strip_size_kb": 64, 00:14:59.757 "state": "configuring", 00:14:59.757 "raid_level": "concat", 00:14:59.757 "superblock": true, 00:14:59.757 "num_base_bdevs": 2, 00:14:59.757 "num_base_bdevs_discovered": 1, 00:14:59.757 "num_base_bdevs_operational": 2, 00:14:59.757 "base_bdevs_list": [ 00:14:59.757 { 00:14:59.757 "name": "pt1", 00:14:59.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.757 "is_configured": true, 00:14:59.757 "data_offset": 2048, 00:14:59.757 "data_size": 63488 00:14:59.757 }, 00:14:59.757 { 00:14:59.757 "name": null, 00:14:59.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.757 "is_configured": false, 00:14:59.757 "data_offset": 2048, 00:14:59.757 "data_size": 63488 00:14:59.757 } 00:14:59.757 ] 00:14:59.757 }' 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.757 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.016 [2024-11-06 09:07:59.012449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.016 [2024-11-06 09:07:59.012529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.016 [2024-11-06 09:07:59.012555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:00.016 [2024-11-06 09:07:59.012571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.016 [2024-11-06 09:07:59.013072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.016 [2024-11-06 09:07:59.013106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.016 [2024-11-06 09:07:59.013196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:00.016 [2024-11-06 09:07:59.013223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.016 [2024-11-06 09:07:59.013371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:00.016 [2024-11-06 09:07:59.013385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.016 [2024-11-06 09:07:59.013658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:00.016 [2024-11-06 09:07:59.013804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:15:00.016 [2024-11-06 09:07:59.013815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:00.016 [2024-11-06 09:07:59.013960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.016 pt2 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.016 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.275 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.275 "name": "raid_bdev1", 00:15:00.275 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:15:00.275 "strip_size_kb": 64, 00:15:00.275 "state": "online", 00:15:00.275 "raid_level": "concat", 00:15:00.275 "superblock": true, 00:15:00.275 "num_base_bdevs": 2, 00:15:00.275 "num_base_bdevs_discovered": 2, 00:15:00.275 "num_base_bdevs_operational": 2, 00:15:00.275 "base_bdevs_list": [ 00:15:00.275 { 00:15:00.275 "name": "pt1", 00:15:00.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.275 "is_configured": true, 00:15:00.275 "data_offset": 2048, 00:15:00.275 "data_size": 63488 00:15:00.275 }, 00:15:00.275 { 00:15:00.275 "name": "pt2", 00:15:00.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.275 "is_configured": true, 00:15:00.275 "data_offset": 2048, 00:15:00.275 "data_size": 63488 00:15:00.275 } 00:15:00.275 ] 00:15:00.275 }' 00:15:00.275 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.275 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.534 09:07:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.534 [2024-11-06 09:07:59.496714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.534 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.534 "name": "raid_bdev1", 00:15:00.534 "aliases": [ 00:15:00.534 "925935ab-12cc-4c8a-aec6-b69e28edd887" 00:15:00.534 ], 00:15:00.534 "product_name": "Raid Volume", 00:15:00.534 "block_size": 512, 00:15:00.534 "num_blocks": 126976, 00:15:00.534 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:15:00.534 "assigned_rate_limits": { 00:15:00.534 "rw_ios_per_sec": 0, 00:15:00.534 "rw_mbytes_per_sec": 0, 00:15:00.534 "r_mbytes_per_sec": 0, 00:15:00.534 "w_mbytes_per_sec": 0 00:15:00.534 }, 00:15:00.534 "claimed": false, 00:15:00.534 "zoned": false, 00:15:00.534 "supported_io_types": { 00:15:00.534 "read": true, 00:15:00.534 "write": true, 00:15:00.534 "unmap": true, 00:15:00.534 "flush": true, 00:15:00.534 "reset": true, 00:15:00.534 "nvme_admin": false, 00:15:00.534 "nvme_io": false, 00:15:00.534 "nvme_io_md": false, 00:15:00.534 "write_zeroes": true, 00:15:00.534 "zcopy": false, 00:15:00.534 "get_zone_info": false, 00:15:00.534 "zone_management": false, 00:15:00.534 "zone_append": false, 00:15:00.534 "compare": false, 00:15:00.534 "compare_and_write": false, 00:15:00.534 "abort": false, 00:15:00.534 "seek_hole": false, 00:15:00.534 
"seek_data": false, 00:15:00.534 "copy": false, 00:15:00.534 "nvme_iov_md": false 00:15:00.534 }, 00:15:00.534 "memory_domains": [ 00:15:00.534 { 00:15:00.534 "dma_device_id": "system", 00:15:00.534 "dma_device_type": 1 00:15:00.535 }, 00:15:00.535 { 00:15:00.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.535 "dma_device_type": 2 00:15:00.535 }, 00:15:00.535 { 00:15:00.535 "dma_device_id": "system", 00:15:00.535 "dma_device_type": 1 00:15:00.535 }, 00:15:00.535 { 00:15:00.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.535 "dma_device_type": 2 00:15:00.535 } 00:15:00.535 ], 00:15:00.535 "driver_specific": { 00:15:00.535 "raid": { 00:15:00.535 "uuid": "925935ab-12cc-4c8a-aec6-b69e28edd887", 00:15:00.535 "strip_size_kb": 64, 00:15:00.535 "state": "online", 00:15:00.535 "raid_level": "concat", 00:15:00.535 "superblock": true, 00:15:00.535 "num_base_bdevs": 2, 00:15:00.535 "num_base_bdevs_discovered": 2, 00:15:00.535 "num_base_bdevs_operational": 2, 00:15:00.535 "base_bdevs_list": [ 00:15:00.535 { 00:15:00.535 "name": "pt1", 00:15:00.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.535 "is_configured": true, 00:15:00.535 "data_offset": 2048, 00:15:00.535 "data_size": 63488 00:15:00.535 }, 00:15:00.535 { 00:15:00.535 "name": "pt2", 00:15:00.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.535 "is_configured": true, 00:15:00.535 "data_offset": 2048, 00:15:00.535 "data_size": 63488 00:15:00.535 } 00:15:00.535 ] 00:15:00.535 } 00:15:00.535 } 00:15:00.535 }' 00:15:00.535 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.796 pt2' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.796 09:07:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:00.796 [2024-11-06 09:07:59.712674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 925935ab-12cc-4c8a-aec6-b69e28edd887 '!=' 925935ab-12cc-4c8a-aec6-b69e28edd887 ']' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61990 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61990 ']' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61990 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61990 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 61990' 00:15:00.796 killing process with pid 61990 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61990 00:15:00.796 [2024-11-06 09:07:59.792064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.796 [2024-11-06 09:07:59.792170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.796 [2024-11-06 09:07:59.792225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.796 [2024-11-06 09:07:59.792239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:00.796 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61990 00:15:01.067 [2024-11-06 09:08:00.019520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.441 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:02.441 00:15:02.441 real 0m4.592s 00:15:02.441 user 0m6.409s 00:15:02.441 sys 0m0.840s 00:15:02.441 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:02.441 ************************************ 00:15:02.441 END TEST raid_superblock_test 00:15:02.441 ************************************ 00:15:02.441 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.441 09:08:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:02.441 09:08:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:02.441 09:08:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:02.441 09:08:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.441 ************************************ 00:15:02.441 START TEST raid_read_error_test 00:15:02.441 ************************************ 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9K5bKmBI07 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62196 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62196 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62196 ']' 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:02.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:02.441 09:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.441 [2024-11-06 09:08:01.409479] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:15:02.441 [2024-11-06 09:08:01.409638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62196 ] 00:15:02.699 [2024-11-06 09:08:01.597414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.699 [2024-11-06 09:08:01.725772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.957 [2024-11-06 09:08:01.955160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.957 [2024-11-06 09:08:01.955233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 BaseBdev1_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 true 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 [2024-11-06 09:08:02.373623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:03.524 [2024-11-06 09:08:02.373697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.524 [2024-11-06 09:08:02.373733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:03.524 [2024-11-06 09:08:02.373767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.524 [2024-11-06 09:08:02.376503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.524 [2024-11-06 09:08:02.376556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.524 BaseBdev1 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 BaseBdev2_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 true 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 [2024-11-06 09:08:02.444418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:03.524 [2024-11-06 09:08:02.444490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.524 [2024-11-06 09:08:02.444521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:03.524 [2024-11-06 09:08:02.444543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.524 [2024-11-06 09:08:02.447346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.524 [2024-11-06 09:08:02.447511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.524 BaseBdev2 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 [2024-11-06 09:08:02.456476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:03.524 [2024-11-06 09:08:02.458765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.524 [2024-11-06 09:08:02.458978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:03.524 [2024-11-06 09:08:02.458997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.524 [2024-11-06 09:08:02.459265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:03.524 [2024-11-06 09:08:02.459482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:03.524 [2024-11-06 09:08:02.459497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:03.524 [2024-11-06 09:08:02.459679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.524 "name": "raid_bdev1", 00:15:03.524 "uuid": "1c2d0b18-5104-4e83-b47e-2483dd5b98a7", 00:15:03.524 "strip_size_kb": 64, 00:15:03.524 "state": "online", 00:15:03.524 "raid_level": "concat", 00:15:03.524 "superblock": true, 00:15:03.524 "num_base_bdevs": 2, 00:15:03.524 "num_base_bdevs_discovered": 2, 00:15:03.524 "num_base_bdevs_operational": 2, 00:15:03.524 "base_bdevs_list": [ 00:15:03.524 { 00:15:03.524 "name": "BaseBdev1", 00:15:03.524 "uuid": "bcb808e5-4216-5dbf-9931-dffd53fa780b", 00:15:03.524 "is_configured": true, 00:15:03.524 "data_offset": 2048, 00:15:03.524 "data_size": 63488 00:15:03.524 }, 00:15:03.524 { 00:15:03.524 "name": "BaseBdev2", 00:15:03.524 "uuid": "00dd67fc-c4eb-5581-b0de-f6e3f165de40", 00:15:03.524 "is_configured": true, 00:15:03.524 "data_offset": 2048, 00:15:03.524 "data_size": 63488 00:15:03.524 } 00:15:03.524 ] 00:15:03.524 }' 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.524 09:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.097 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:15:04.097 09:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:04.097 [2024-11-06 09:08:03.046096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.030 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.030 "name": "raid_bdev1", 00:15:05.031 "uuid": "1c2d0b18-5104-4e83-b47e-2483dd5b98a7", 00:15:05.031 "strip_size_kb": 64, 00:15:05.031 "state": "online", 00:15:05.031 "raid_level": "concat", 00:15:05.031 "superblock": true, 00:15:05.031 "num_base_bdevs": 2, 00:15:05.031 "num_base_bdevs_discovered": 2, 00:15:05.031 "num_base_bdevs_operational": 2, 00:15:05.031 "base_bdevs_list": [ 00:15:05.031 { 00:15:05.031 "name": "BaseBdev1", 00:15:05.031 "uuid": "bcb808e5-4216-5dbf-9931-dffd53fa780b", 00:15:05.031 "is_configured": true, 00:15:05.031 "data_offset": 2048, 00:15:05.031 "data_size": 63488 00:15:05.031 }, 00:15:05.031 { 00:15:05.031 "name": "BaseBdev2", 00:15:05.031 "uuid": "00dd67fc-c4eb-5581-b0de-f6e3f165de40", 00:15:05.031 "is_configured": true, 00:15:05.031 "data_offset": 2048, 00:15:05.031 "data_size": 63488 00:15:05.031 } 00:15:05.031 ] 00:15:05.031 }' 00:15:05.031 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.031 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.596 09:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.596 09:08:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.596 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.596 [2024-11-06 09:08:04.427333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.596 [2024-11-06 09:08:04.427374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.596 [2024-11-06 09:08:04.430227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.596 [2024-11-06 09:08:04.430438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.596 [2024-11-06 09:08:04.430491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.596 [2024-11-06 09:08:04.430511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:05.596 { 00:15:05.596 "results": [ 00:15:05.596 { 00:15:05.596 "job": "raid_bdev1", 00:15:05.596 "core_mask": "0x1", 00:15:05.596 "workload": "randrw", 00:15:05.596 "percentage": 50, 00:15:05.596 "status": "finished", 00:15:05.596 "queue_depth": 1, 00:15:05.596 "io_size": 131072, 00:15:05.596 "runtime": 1.38101, 00:15:05.596 "iops": 15514.00786380982, 00:15:05.596 "mibps": 1939.2509829762275, 00:15:05.596 "io_failed": 1, 00:15:05.596 "io_timeout": 0, 00:15:05.596 "avg_latency_us": 89.02445664296314, 00:15:05.596 "min_latency_us": 29.19839357429719, 00:15:05.596 "max_latency_us": 1566.0208835341366 00:15:05.596 } 00:15:05.596 ], 00:15:05.596 "core_count": 1 00:15:05.596 } 00:15:05.596 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62196 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62196 ']' 00:15:05.597 09:08:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62196 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62196 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62196' 00:15:05.597 killing process with pid 62196 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62196 00:15:05.597 [2024-11-06 09:08:04.485171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.597 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62196 00:15:05.597 [2024-11-06 09:08:04.632807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9K5bKmBI07 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:15:06.972 00:15:06.972 real 0m4.621s 00:15:06.972 user 0m5.613s 00:15:06.972 sys 0m0.625s 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:06.972 ************************************ 00:15:06.972 END TEST raid_read_error_test 00:15:06.972 ************************************ 00:15:06.972 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.972 09:08:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:06.972 09:08:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:06.972 09:08:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:06.972 09:08:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.972 ************************************ 00:15:06.972 START TEST raid_write_error_test 00:15:06.972 ************************************ 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.972 09:08:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:06.972 09:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Dnr4A5lx7S 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62347 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62347 00:15:06.972 09:08:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62347 ']' 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.972 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:06.973 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.973 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:06.973 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.231 [2024-11-06 09:08:06.102252] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:15:07.231 [2024-11-06 09:08:06.102409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62347 ] 00:15:07.488 [2024-11-06 09:08:06.287495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.488 [2024-11-06 09:08:06.416141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.746 [2024-11-06 09:08:06.638185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.746 [2024-11-06 09:08:06.638233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.004 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.004 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:08.004 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:15:08.004 09:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.005 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.005 09:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.005 BaseBdev1_malloc 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.005 true 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.005 [2024-11-06 09:08:07.029666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:08.005 [2024-11-06 09:08:07.029729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.005 [2024-11-06 09:08:07.029754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:08.005 [2024-11-06 09:08:07.029771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.005 [2024-11-06 09:08:07.032388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.005 [2024-11-06 09:08:07.032435] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.005 BaseBdev1 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.005 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.263 BaseBdev2_malloc 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.263 true 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.263 [2024-11-06 09:08:07.093601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:08.263 [2024-11-06 09:08:07.093666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.263 [2024-11-06 09:08:07.093686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:08.263 
[2024-11-06 09:08:07.093702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.263 [2024-11-06 09:08:07.096262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.263 [2024-11-06 09:08:07.096327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.263 BaseBdev2 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.263 [2024-11-06 09:08:07.101666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.263 [2024-11-06 09:08:07.103931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.263 [2024-11-06 09:08:07.104144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:08.263 [2024-11-06 09:08:07.104164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:08.263 [2024-11-06 09:08:07.104452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:08.263 [2024-11-06 09:08:07.104642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:08.263 [2024-11-06 09:08:07.104659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:08.263 [2024-11-06 09:08:07.104826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.263 
09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.263 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.263 "name": "raid_bdev1", 00:15:08.263 "uuid": "20f41fee-24f1-49d3-9d8a-ad9d5210cbc4", 00:15:08.263 "strip_size_kb": 64, 00:15:08.264 "state": "online", 00:15:08.264 "raid_level": "concat", 00:15:08.264 "superblock": true, 
00:15:08.264 "num_base_bdevs": 2, 00:15:08.264 "num_base_bdevs_discovered": 2, 00:15:08.264 "num_base_bdevs_operational": 2, 00:15:08.264 "base_bdevs_list": [ 00:15:08.264 { 00:15:08.264 "name": "BaseBdev1", 00:15:08.264 "uuid": "d9010fa5-2c63-5171-8c87-79c78a28d12f", 00:15:08.264 "is_configured": true, 00:15:08.264 "data_offset": 2048, 00:15:08.264 "data_size": 63488 00:15:08.264 }, 00:15:08.264 { 00:15:08.264 "name": "BaseBdev2", 00:15:08.264 "uuid": "d6359ff0-2a4c-5ff0-b7cd-64a488651733", 00:15:08.264 "is_configured": true, 00:15:08.264 "data_offset": 2048, 00:15:08.264 "data_size": 63488 00:15:08.264 } 00:15:08.264 ] 00:15:08.264 }' 00:15:08.264 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.264 09:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.522 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:08.522 09:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:08.780 [2024-11-06 09:08:07.634707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.713 "name": "raid_bdev1", 00:15:09.713 "uuid": "20f41fee-24f1-49d3-9d8a-ad9d5210cbc4", 00:15:09.713 "strip_size_kb": 64, 00:15:09.713 "state": "online", 00:15:09.713 "raid_level": "concat", 
00:15:09.713 "superblock": true, 00:15:09.713 "num_base_bdevs": 2, 00:15:09.713 "num_base_bdevs_discovered": 2, 00:15:09.713 "num_base_bdevs_operational": 2, 00:15:09.713 "base_bdevs_list": [ 00:15:09.713 { 00:15:09.713 "name": "BaseBdev1", 00:15:09.713 "uuid": "d9010fa5-2c63-5171-8c87-79c78a28d12f", 00:15:09.713 "is_configured": true, 00:15:09.713 "data_offset": 2048, 00:15:09.713 "data_size": 63488 00:15:09.713 }, 00:15:09.713 { 00:15:09.713 "name": "BaseBdev2", 00:15:09.713 "uuid": "d6359ff0-2a4c-5ff0-b7cd-64a488651733", 00:15:09.713 "is_configured": true, 00:15:09.713 "data_offset": 2048, 00:15:09.713 "data_size": 63488 00:15:09.713 } 00:15:09.713 ] 00:15:09.713 }' 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.713 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.972 [2024-11-06 09:08:08.961734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.972 [2024-11-06 09:08:08.961776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.972 [2024-11-06 09:08:08.964399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.972 [2024-11-06 09:08:08.964443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.972 [2024-11-06 09:08:08.964475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.972 [2024-11-06 09:08:08.964495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:09.972 { 
00:15:09.972 "results": [ 00:15:09.972 { 00:15:09.972 "job": "raid_bdev1", 00:15:09.972 "core_mask": "0x1", 00:15:09.972 "workload": "randrw", 00:15:09.972 "percentage": 50, 00:15:09.972 "status": "finished", 00:15:09.972 "queue_depth": 1, 00:15:09.972 "io_size": 131072, 00:15:09.972 "runtime": 1.32694, 00:15:09.972 "iops": 15545.540868464286, 00:15:09.972 "mibps": 1943.1926085580358, 00:15:09.972 "io_failed": 1, 00:15:09.972 "io_timeout": 0, 00:15:09.972 "avg_latency_us": 88.91122346772323, 00:15:09.972 "min_latency_us": 27.142168674698794, 00:15:09.972 "max_latency_us": 1447.5823293172691 00:15:09.972 } 00:15:09.972 ], 00:15:09.972 "core_count": 1 00:15:09.972 } 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62347 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62347 ']' 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62347 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:09.972 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62347 00:15:10.234 killing process with pid 62347 00:15:10.234 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:10.234 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:10.234 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62347' 00:15:10.234 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62347 00:15:10.234 [2024-11-06 09:08:09.014776] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.234 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62347 00:15:10.234 [2024-11-06 09:08:09.153899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Dnr4A5lx7S 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:15:11.613 00:15:11.613 real 0m4.360s 00:15:11.613 user 0m5.234s 00:15:11.613 sys 0m0.569s 00:15:11.613 ************************************ 00:15:11.613 END TEST raid_write_error_test 00:15:11.613 ************************************ 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.613 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.613 09:08:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:11.613 09:08:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:11.613 09:08:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:11.613 09:08:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.613 09:08:10 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.613 ************************************ 00:15:11.613 START TEST raid_state_function_test 00:15:11.613 ************************************ 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62485 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:11.613 Process raid pid: 62485 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62485' 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62485 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62485 ']' 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:11.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:11.613 09:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.613 [2024-11-06 09:08:10.514972] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:15:11.613 [2024-11-06 09:08:10.515359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.872 [2024-11-06 09:08:10.701056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.872 [2024-11-06 09:08:10.823391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.131 [2024-11-06 09:08:11.033707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.131 [2024-11-06 09:08:11.033759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.699 [2024-11-06 09:08:11.476466] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.699 [2024-11-06 09:08:11.476533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.699 [2024-11-06 09:08:11.476547] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:15:12.699 [2024-11-06 09:08:11.476562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.699 "name": "Existed_Raid", 00:15:12.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.699 "strip_size_kb": 0, 00:15:12.699 "state": "configuring", 00:15:12.699 "raid_level": "raid1", 00:15:12.699 "superblock": false, 00:15:12.699 "num_base_bdevs": 2, 00:15:12.699 "num_base_bdevs_discovered": 0, 00:15:12.699 "num_base_bdevs_operational": 2, 00:15:12.699 "base_bdevs_list": [ 00:15:12.699 { 00:15:12.699 "name": "BaseBdev1", 00:15:12.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.699 "is_configured": false, 00:15:12.699 "data_offset": 0, 00:15:12.699 "data_size": 0 00:15:12.699 }, 00:15:12.699 { 00:15:12.699 "name": "BaseBdev2", 00:15:12.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.699 "is_configured": false, 00:15:12.699 "data_offset": 0, 00:15:12.699 "data_size": 0 00:15:12.699 } 00:15:12.699 ] 00:15:12.699 }' 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.699 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.957 [2024-11-06 09:08:11.935845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.957 [2024-11-06 09:08:11.935886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.957 [2024-11-06 09:08:11.943805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.957 [2024-11-06 09:08:11.943975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.957 [2024-11-06 09:08:11.944069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.957 [2024-11-06 09:08:11.944120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.957 [2024-11-06 09:08:11.991486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.957 BaseBdev1 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:12.957 
09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.957 09:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 [ 00:15:13.216 { 00:15:13.216 "name": "BaseBdev1", 00:15:13.216 "aliases": [ 00:15:13.216 "1e3f17cc-0918-4ac6-af87-e326093baa17" 00:15:13.216 ], 00:15:13.216 "product_name": "Malloc disk", 00:15:13.216 "block_size": 512, 00:15:13.216 "num_blocks": 65536, 00:15:13.216 "uuid": "1e3f17cc-0918-4ac6-af87-e326093baa17", 00:15:13.216 "assigned_rate_limits": { 00:15:13.216 "rw_ios_per_sec": 0, 00:15:13.216 "rw_mbytes_per_sec": 0, 00:15:13.216 "r_mbytes_per_sec": 0, 00:15:13.216 "w_mbytes_per_sec": 0 00:15:13.216 }, 00:15:13.216 "claimed": true, 00:15:13.216 "claim_type": "exclusive_write", 00:15:13.216 "zoned": false, 00:15:13.216 "supported_io_types": { 00:15:13.216 "read": true, 00:15:13.216 "write": true, 00:15:13.216 "unmap": true, 00:15:13.216 "flush": true, 00:15:13.216 "reset": true, 00:15:13.216 "nvme_admin": false, 00:15:13.216 "nvme_io": false, 00:15:13.216 "nvme_io_md": false, 00:15:13.216 "write_zeroes": true, 00:15:13.216 "zcopy": true, 00:15:13.216 "get_zone_info": 
false, 00:15:13.216 "zone_management": false, 00:15:13.216 "zone_append": false, 00:15:13.216 "compare": false, 00:15:13.216 "compare_and_write": false, 00:15:13.216 "abort": true, 00:15:13.216 "seek_hole": false, 00:15:13.216 "seek_data": false, 00:15:13.216 "copy": true, 00:15:13.216 "nvme_iov_md": false 00:15:13.216 }, 00:15:13.216 "memory_domains": [ 00:15:13.216 { 00:15:13.216 "dma_device_id": "system", 00:15:13.216 "dma_device_type": 1 00:15:13.216 }, 00:15:13.216 { 00:15:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.216 "dma_device_type": 2 00:15:13.216 } 00:15:13.216 ], 00:15:13.216 "driver_specific": {} 00:15:13.216 } 00:15:13.216 ] 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.216 "name": "Existed_Raid", 00:15:13.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.216 "strip_size_kb": 0, 00:15:13.216 "state": "configuring", 00:15:13.216 "raid_level": "raid1", 00:15:13.216 "superblock": false, 00:15:13.216 "num_base_bdevs": 2, 00:15:13.216 "num_base_bdevs_discovered": 1, 00:15:13.216 "num_base_bdevs_operational": 2, 00:15:13.216 "base_bdevs_list": [ 00:15:13.216 { 00:15:13.216 "name": "BaseBdev1", 00:15:13.216 "uuid": "1e3f17cc-0918-4ac6-af87-e326093baa17", 00:15:13.216 "is_configured": true, 00:15:13.216 "data_offset": 0, 00:15:13.216 "data_size": 65536 00:15:13.216 }, 00:15:13.216 { 00:15:13.216 "name": "BaseBdev2", 00:15:13.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.216 "is_configured": false, 00:15:13.216 "data_offset": 0, 00:15:13.216 "data_size": 0 00:15:13.216 } 00:15:13.216 ] 00:15:13.216 }' 00:15:13.216 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.217 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.474 [2024-11-06 09:08:12.479428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.474 [2024-11-06 09:08:12.479483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.474 [2024-11-06 09:08:12.491477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.474 [2024-11-06 09:08:12.493760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.474 [2024-11-06 09:08:12.493942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.474 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.732 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.732 "name": "Existed_Raid", 00:15:13.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.732 "strip_size_kb": 0, 00:15:13.732 "state": "configuring", 00:15:13.732 "raid_level": "raid1", 00:15:13.732 "superblock": false, 00:15:13.732 "num_base_bdevs": 2, 00:15:13.732 "num_base_bdevs_discovered": 1, 00:15:13.732 "num_base_bdevs_operational": 2, 00:15:13.732 "base_bdevs_list": [ 00:15:13.732 { 00:15:13.732 "name": "BaseBdev1", 00:15:13.732 "uuid": "1e3f17cc-0918-4ac6-af87-e326093baa17", 00:15:13.732 
"is_configured": true, 00:15:13.732 "data_offset": 0, 00:15:13.732 "data_size": 65536 00:15:13.732 }, 00:15:13.732 { 00:15:13.732 "name": "BaseBdev2", 00:15:13.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.732 "is_configured": false, 00:15:13.732 "data_offset": 0, 00:15:13.732 "data_size": 0 00:15:13.732 } 00:15:13.732 ] 00:15:13.732 }' 00:15:13.732 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.732 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.991 [2024-11-06 09:08:12.958000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.991 [2024-11-06 09:08:12.958070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:13.991 [2024-11-06 09:08:12.958081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:13.991 [2024-11-06 09:08:12.958417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:13.991 [2024-11-06 09:08:12.958597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:13.991 [2024-11-06 09:08:12.958615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:13.991 [2024-11-06 09:08:12.958915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.991 BaseBdev2 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:13.991 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.992 09:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.992 [ 00:15:13.992 { 00:15:13.992 "name": "BaseBdev2", 00:15:13.992 "aliases": [ 00:15:13.992 "fd62c3e9-47d8-440a-998d-573e42b804f9" 00:15:13.992 ], 00:15:13.992 "product_name": "Malloc disk", 00:15:13.992 "block_size": 512, 00:15:13.992 "num_blocks": 65536, 00:15:13.992 "uuid": "fd62c3e9-47d8-440a-998d-573e42b804f9", 00:15:13.992 "assigned_rate_limits": { 00:15:13.992 "rw_ios_per_sec": 0, 00:15:13.992 "rw_mbytes_per_sec": 0, 00:15:13.992 "r_mbytes_per_sec": 0, 00:15:13.992 "w_mbytes_per_sec": 0 00:15:13.992 }, 00:15:13.992 "claimed": true, 00:15:13.992 "claim_type": 
"exclusive_write", 00:15:13.992 "zoned": false, 00:15:13.992 "supported_io_types": { 00:15:13.992 "read": true, 00:15:13.992 "write": true, 00:15:13.992 "unmap": true, 00:15:13.992 "flush": true, 00:15:13.992 "reset": true, 00:15:13.992 "nvme_admin": false, 00:15:13.992 "nvme_io": false, 00:15:13.992 "nvme_io_md": false, 00:15:13.992 "write_zeroes": true, 00:15:13.992 "zcopy": true, 00:15:13.992 "get_zone_info": false, 00:15:13.992 "zone_management": false, 00:15:13.992 "zone_append": false, 00:15:13.992 "compare": false, 00:15:13.992 "compare_and_write": false, 00:15:13.992 "abort": true, 00:15:13.992 "seek_hole": false, 00:15:13.992 "seek_data": false, 00:15:13.992 "copy": true, 00:15:13.992 "nvme_iov_md": false 00:15:13.992 }, 00:15:13.992 "memory_domains": [ 00:15:13.992 { 00:15:13.992 "dma_device_id": "system", 00:15:13.992 "dma_device_type": 1 00:15:13.992 }, 00:15:13.992 { 00:15:13.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.992 "dma_device_type": 2 00:15:13.992 } 00:15:13.992 ], 00:15:13.992 "driver_specific": {} 00:15:13.992 } 00:15:13.992 ] 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.992 
09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.992 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.251 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.251 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.251 "name": "Existed_Raid", 00:15:14.251 "uuid": "3d03294d-9a6b-48dc-bcb2-d253295e5614", 00:15:14.251 "strip_size_kb": 0, 00:15:14.251 "state": "online", 00:15:14.251 "raid_level": "raid1", 00:15:14.251 "superblock": false, 00:15:14.251 "num_base_bdevs": 2, 00:15:14.251 "num_base_bdevs_discovered": 2, 00:15:14.251 "num_base_bdevs_operational": 2, 00:15:14.251 "base_bdevs_list": [ 00:15:14.251 { 00:15:14.251 "name": "BaseBdev1", 00:15:14.251 "uuid": "1e3f17cc-0918-4ac6-af87-e326093baa17", 00:15:14.251 "is_configured": true, 00:15:14.251 "data_offset": 0, 00:15:14.251 "data_size": 65536 00:15:14.251 }, 00:15:14.251 { 00:15:14.251 "name": "BaseBdev2", 
00:15:14.251 "uuid": "fd62c3e9-47d8-440a-998d-573e42b804f9", 00:15:14.251 "is_configured": true, 00:15:14.251 "data_offset": 0, 00:15:14.251 "data_size": 65536 00:15:14.251 } 00:15:14.251 ] 00:15:14.251 }' 00:15:14.251 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.251 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.509 [2024-11-06 09:08:13.445972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.509 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:14.509 "name": "Existed_Raid", 00:15:14.509 "aliases": [ 00:15:14.509 "3d03294d-9a6b-48dc-bcb2-d253295e5614" 00:15:14.509 ], 
00:15:14.509 "product_name": "Raid Volume", 00:15:14.509 "block_size": 512, 00:15:14.509 "num_blocks": 65536, 00:15:14.510 "uuid": "3d03294d-9a6b-48dc-bcb2-d253295e5614", 00:15:14.510 "assigned_rate_limits": { 00:15:14.510 "rw_ios_per_sec": 0, 00:15:14.510 "rw_mbytes_per_sec": 0, 00:15:14.510 "r_mbytes_per_sec": 0, 00:15:14.510 "w_mbytes_per_sec": 0 00:15:14.510 }, 00:15:14.510 "claimed": false, 00:15:14.510 "zoned": false, 00:15:14.510 "supported_io_types": { 00:15:14.510 "read": true, 00:15:14.510 "write": true, 00:15:14.510 "unmap": false, 00:15:14.510 "flush": false, 00:15:14.510 "reset": true, 00:15:14.510 "nvme_admin": false, 00:15:14.510 "nvme_io": false, 00:15:14.510 "nvme_io_md": false, 00:15:14.510 "write_zeroes": true, 00:15:14.510 "zcopy": false, 00:15:14.510 "get_zone_info": false, 00:15:14.510 "zone_management": false, 00:15:14.510 "zone_append": false, 00:15:14.510 "compare": false, 00:15:14.510 "compare_and_write": false, 00:15:14.510 "abort": false, 00:15:14.510 "seek_hole": false, 00:15:14.510 "seek_data": false, 00:15:14.510 "copy": false, 00:15:14.510 "nvme_iov_md": false 00:15:14.510 }, 00:15:14.510 "memory_domains": [ 00:15:14.510 { 00:15:14.510 "dma_device_id": "system", 00:15:14.510 "dma_device_type": 1 00:15:14.510 }, 00:15:14.510 { 00:15:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.510 "dma_device_type": 2 00:15:14.510 }, 00:15:14.510 { 00:15:14.510 "dma_device_id": "system", 00:15:14.510 "dma_device_type": 1 00:15:14.510 }, 00:15:14.510 { 00:15:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.510 "dma_device_type": 2 00:15:14.510 } 00:15:14.510 ], 00:15:14.510 "driver_specific": { 00:15:14.510 "raid": { 00:15:14.510 "uuid": "3d03294d-9a6b-48dc-bcb2-d253295e5614", 00:15:14.510 "strip_size_kb": 0, 00:15:14.510 "state": "online", 00:15:14.510 "raid_level": "raid1", 00:15:14.510 "superblock": false, 00:15:14.510 "num_base_bdevs": 2, 00:15:14.510 "num_base_bdevs_discovered": 2, 00:15:14.510 "num_base_bdevs_operational": 
2, 00:15:14.510 "base_bdevs_list": [ 00:15:14.510 { 00:15:14.510 "name": "BaseBdev1", 00:15:14.510 "uuid": "1e3f17cc-0918-4ac6-af87-e326093baa17", 00:15:14.510 "is_configured": true, 00:15:14.510 "data_offset": 0, 00:15:14.510 "data_size": 65536 00:15:14.510 }, 00:15:14.510 { 00:15:14.510 "name": "BaseBdev2", 00:15:14.510 "uuid": "fd62c3e9-47d8-440a-998d-573e42b804f9", 00:15:14.510 "is_configured": true, 00:15:14.510 "data_offset": 0, 00:15:14.510 "data_size": 65536 00:15:14.510 } 00:15:14.510 ] 00:15:14.510 } 00:15:14.510 } 00:15:14.510 }' 00:15:14.510 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.510 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:14.510 BaseBdev2' 00:15:14.510 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.510 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:14.510 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.768 09:08:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.768 [2024-11-06 09:08:13.633714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.768 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.769 "name": "Existed_Raid", 00:15:14.769 "uuid": 
"3d03294d-9a6b-48dc-bcb2-d253295e5614", 00:15:14.769 "strip_size_kb": 0, 00:15:14.769 "state": "online", 00:15:14.769 "raid_level": "raid1", 00:15:14.769 "superblock": false, 00:15:14.769 "num_base_bdevs": 2, 00:15:14.769 "num_base_bdevs_discovered": 1, 00:15:14.769 "num_base_bdevs_operational": 1, 00:15:14.769 "base_bdevs_list": [ 00:15:14.769 { 00:15:14.769 "name": null, 00:15:14.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.769 "is_configured": false, 00:15:14.769 "data_offset": 0, 00:15:14.769 "data_size": 65536 00:15:14.769 }, 00:15:14.769 { 00:15:14.769 "name": "BaseBdev2", 00:15:14.769 "uuid": "fd62c3e9-47d8-440a-998d-573e42b804f9", 00:15:14.769 "is_configured": true, 00:15:14.769 "data_offset": 0, 00:15:14.769 "data_size": 65536 00:15:14.769 } 00:15:14.769 ] 00:15:14.769 }' 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.769 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 [2024-11-06 09:08:14.226995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.336 [2024-11-06 09:08:14.227235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.336 [2024-11-06 09:08:14.330310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.336 [2024-11-06 09:08:14.330374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.336 [2024-11-06 09:08:14.330390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.336 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:15.594 
09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62485 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62485 ']' 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62485 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62485 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:15.594 killing process with pid 62485 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62485' 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62485 00:15:15.594 [2024-11-06 09:08:14.426148] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.594 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62485 00:15:15.594 [2024-11-06 09:08:14.445297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:16.970 00:15:16.970 real 0m5.173s 00:15:16.970 user 0m7.514s 00:15:16.970 sys 0m0.875s 00:15:16.970 ************************************ 00:15:16.970 END TEST raid_state_function_test 00:15:16.970 
************************************ 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 09:08:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:16.970 09:08:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:16.970 09:08:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:16.970 09:08:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 ************************************ 00:15:16.970 START TEST raid_state_function_test_sb 00:15:16.970 ************************************ 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62738 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:16.970 Process raid pid: 62738 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62738' 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62738 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 62738 ']' 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:16.970 09:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 [2024-11-06 09:08:15.769380] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:15:16.970 [2024-11-06 09:08:15.769530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.970 [2024-11-06 09:08:15.955844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.228 [2024-11-06 09:08:16.088224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.486 [2024-11-06 09:08:16.310767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.486 [2024-11-06 09:08:16.310817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 [2024-11-06 09:08:16.661483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.774 [2024-11-06 09:08:16.661543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.774 [2024-11-06 09:08:16.661564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.774 [2024-11-06 09:08:16.661579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.774 "name": "Existed_Raid", 00:15:17.774 "uuid": "76287fe4-f34f-47b7-9e64-794ffea35086", 00:15:17.774 "strip_size_kb": 0, 00:15:17.774 "state": "configuring", 00:15:17.774 "raid_level": "raid1", 00:15:17.774 "superblock": true, 00:15:17.774 "num_base_bdevs": 2, 00:15:17.774 "num_base_bdevs_discovered": 0, 00:15:17.774 "num_base_bdevs_operational": 2, 00:15:17.774 "base_bdevs_list": [ 00:15:17.774 { 00:15:17.774 "name": "BaseBdev1", 00:15:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.774 "is_configured": false, 00:15:17.774 "data_offset": 0, 00:15:17.774 "data_size": 0 00:15:17.774 }, 00:15:17.774 { 00:15:17.774 "name": "BaseBdev2", 00:15:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.774 "is_configured": false, 00:15:17.774 "data_offset": 0, 00:15:17.774 "data_size": 0 00:15:17.774 } 00:15:17.774 ] 00:15:17.774 }' 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.774 09:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.339 [2024-11-06 09:08:17.121451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.339 [2024-11-06 09:08:17.121491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.339 [2024-11-06 09:08:17.133442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.339 [2024-11-06 09:08:17.133491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.339 [2024-11-06 09:08:17.133503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.339 [2024-11-06 09:08:17.133520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:18.339 [2024-11-06 09:08:17.182756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.339 BaseBdev1 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.339 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.339 [ 00:15:18.339 { 00:15:18.339 "name": "BaseBdev1", 00:15:18.339 "aliases": [ 00:15:18.339 "df3f855c-dc20-43f0-9db9-05cbf86827fc" 00:15:18.339 ], 00:15:18.339 "product_name": "Malloc disk", 00:15:18.339 "block_size": 512, 
00:15:18.339 "num_blocks": 65536, 00:15:18.339 "uuid": "df3f855c-dc20-43f0-9db9-05cbf86827fc", 00:15:18.339 "assigned_rate_limits": { 00:15:18.339 "rw_ios_per_sec": 0, 00:15:18.339 "rw_mbytes_per_sec": 0, 00:15:18.339 "r_mbytes_per_sec": 0, 00:15:18.339 "w_mbytes_per_sec": 0 00:15:18.339 }, 00:15:18.339 "claimed": true, 00:15:18.339 "claim_type": "exclusive_write", 00:15:18.339 "zoned": false, 00:15:18.339 "supported_io_types": { 00:15:18.339 "read": true, 00:15:18.339 "write": true, 00:15:18.339 "unmap": true, 00:15:18.339 "flush": true, 00:15:18.339 "reset": true, 00:15:18.339 "nvme_admin": false, 00:15:18.339 "nvme_io": false, 00:15:18.339 "nvme_io_md": false, 00:15:18.339 "write_zeroes": true, 00:15:18.339 "zcopy": true, 00:15:18.339 "get_zone_info": false, 00:15:18.339 "zone_management": false, 00:15:18.339 "zone_append": false, 00:15:18.339 "compare": false, 00:15:18.339 "compare_and_write": false, 00:15:18.339 "abort": true, 00:15:18.339 "seek_hole": false, 00:15:18.339 "seek_data": false, 00:15:18.339 "copy": true, 00:15:18.339 "nvme_iov_md": false 00:15:18.339 }, 00:15:18.339 "memory_domains": [ 00:15:18.339 { 00:15:18.339 "dma_device_id": "system", 00:15:18.339 "dma_device_type": 1 00:15:18.339 }, 00:15:18.339 { 00:15:18.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.339 "dma_device_type": 2 00:15:18.339 } 00:15:18.340 ], 00:15:18.340 "driver_specific": {} 00:15:18.340 } 00:15:18.340 ] 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.340 "name": "Existed_Raid", 00:15:18.340 "uuid": "ea5665d9-d3dd-443c-9acf-ffebcd268907", 00:15:18.340 "strip_size_kb": 0, 00:15:18.340 "state": "configuring", 00:15:18.340 "raid_level": "raid1", 00:15:18.340 "superblock": true, 00:15:18.340 "num_base_bdevs": 2, 00:15:18.340 "num_base_bdevs_discovered": 1, 00:15:18.340 "num_base_bdevs_operational": 2, 00:15:18.340 "base_bdevs_list": [ 00:15:18.340 { 00:15:18.340 "name": "BaseBdev1", 
00:15:18.340 "uuid": "df3f855c-dc20-43f0-9db9-05cbf86827fc", 00:15:18.340 "is_configured": true, 00:15:18.340 "data_offset": 2048, 00:15:18.340 "data_size": 63488 00:15:18.340 }, 00:15:18.340 { 00:15:18.340 "name": "BaseBdev2", 00:15:18.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.340 "is_configured": false, 00:15:18.340 "data_offset": 0, 00:15:18.340 "data_size": 0 00:15:18.340 } 00:15:18.340 ] 00:15:18.340 }' 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.340 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 [2024-11-06 09:08:17.622430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.597 [2024-11-06 09:08:17.622488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.597 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 [2024-11-06 09:08:17.634499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.854 [2024-11-06 09:08:17.636973] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:15:18.854 [2024-11-06 09:08:17.637162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.854 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.854 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:18.854 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:18.855 "name": "Existed_Raid",
00:15:18.855 "uuid": "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0",
00:15:18.855 "strip_size_kb": 0,
00:15:18.855 "state": "configuring",
00:15:18.855 "raid_level": "raid1",
00:15:18.855 "superblock": true,
00:15:18.855 "num_base_bdevs": 2,
00:15:18.855 "num_base_bdevs_discovered": 1,
00:15:18.855 "num_base_bdevs_operational": 2,
00:15:18.855 "base_bdevs_list": [
00:15:18.855 {
00:15:18.855 "name": "BaseBdev1",
00:15:18.855 "uuid": "df3f855c-dc20-43f0-9db9-05cbf86827fc",
00:15:18.855 "is_configured": true,
00:15:18.855 "data_offset": 2048,
00:15:18.855 "data_size": 63488
00:15:18.855 },
00:15:18.855 {
00:15:18.855 "name": "BaseBdev2",
00:15:18.855 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.855 "is_configured": false,
00:15:18.855 "data_offset": 0,
00:15:18.855 "data_size": 0
00:15:18.855 }
00:15:18.855 ]
00:15:18.855 }'
00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:18.855 09:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.112 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:19.112 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.112 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.370 [2024-11-06 09:08:18.167405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:19.370 BaseBdev2
00:15:19.370 [2024-11-06 09:08:18.167869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:19.370 [2024-11-06 09:08:18.167893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:19.370 [2024-11-06 09:08:18.168193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:19.370 [2024-11-06 09:08:18.168367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:19.370 [2024-11-06 09:08:18.168393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:15:19.370 [2024-11-06 09:08:18.168565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.370 [
00:15:19.370 {
00:15:19.370 "name": "BaseBdev2",
00:15:19.370 "aliases": [
00:15:19.370 "33d9bbd8-86bc-4bd8-8c93-61068a6e81d9"
00:15:19.370 ],
00:15:19.370 "product_name": "Malloc disk",
00:15:19.370 "block_size": 512,
00:15:19.370 "num_blocks": 65536,
00:15:19.370 "uuid": "33d9bbd8-86bc-4bd8-8c93-61068a6e81d9",
00:15:19.370 "assigned_rate_limits": {
00:15:19.370 "rw_ios_per_sec": 0,
00:15:19.370 "rw_mbytes_per_sec": 0,
00:15:19.370 "r_mbytes_per_sec": 0,
00:15:19.370 "w_mbytes_per_sec": 0
00:15:19.370 },
00:15:19.370 "claimed": true,
00:15:19.370 "claim_type": "exclusive_write",
00:15:19.370 "zoned": false,
00:15:19.370 "supported_io_types": {
00:15:19.370 "read": true,
00:15:19.370 "write": true,
00:15:19.370 "unmap": true,
00:15:19.370 "flush": true,
00:15:19.370 "reset": true,
00:15:19.370 "nvme_admin": false,
00:15:19.370 "nvme_io": false,
00:15:19.370 "nvme_io_md": false,
00:15:19.370 "write_zeroes": true,
00:15:19.370 "zcopy": true,
00:15:19.370 "get_zone_info": false,
00:15:19.370 "zone_management": false,
00:15:19.370 "zone_append": false,
00:15:19.370 "compare": false,
00:15:19.370 "compare_and_write": false,
00:15:19.370 "abort": true,
00:15:19.370 "seek_hole": false,
00:15:19.370 "seek_data": false,
00:15:19.370 "copy": true,
00:15:19.370 "nvme_iov_md": false
00:15:19.370 },
00:15:19.370 "memory_domains": [
00:15:19.370 {
00:15:19.370 "dma_device_id": "system",
00:15:19.370 "dma_device_type": 1
00:15:19.370 },
00:15:19.370 {
00:15:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:19.370 "dma_device_type": 2
00:15:19.370 }
00:15:19.370 ],
00:15:19.370 "driver_specific": {}
00:15:19.370 }
00:15:19.370 ]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:19.370 "name": "Existed_Raid",
00:15:19.370 "uuid": "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0",
00:15:19.370 "strip_size_kb": 0,
00:15:19.370 "state": "online",
00:15:19.370 "raid_level": "raid1",
00:15:19.370 "superblock": true,
00:15:19.370 "num_base_bdevs": 2,
00:15:19.370 "num_base_bdevs_discovered": 2,
00:15:19.370 "num_base_bdevs_operational": 2,
00:15:19.370 "base_bdevs_list": [
00:15:19.370 {
00:15:19.370 "name": "BaseBdev1",
00:15:19.370 "uuid": "df3f855c-dc20-43f0-9db9-05cbf86827fc",
00:15:19.370 "is_configured": true,
00:15:19.370 "data_offset": 2048,
00:15:19.370 "data_size": 63488
00:15:19.370 },
00:15:19.370 {
00:15:19.370 "name": "BaseBdev2",
00:15:19.370 "uuid": "33d9bbd8-86bc-4bd8-8c93-61068a6e81d9",
00:15:19.370 "is_configured": true,
00:15:19.370 "data_offset": 2048,
00:15:19.370 "data_size": 63488
00:15:19.370 }
00:15:19.370 ]
00:15:19.370 }'
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:19.370 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.936 [2024-11-06 09:08:18.695639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:19.936 "name": "Existed_Raid",
00:15:19.936 "aliases": [
00:15:19.936 "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0"
00:15:19.936 ],
00:15:19.936 "product_name": "Raid Volume",
00:15:19.936 "block_size": 512,
00:15:19.936 "num_blocks": 63488,
00:15:19.936 "uuid": "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0",
00:15:19.936 "assigned_rate_limits": {
00:15:19.936 "rw_ios_per_sec": 0,
00:15:19.936 "rw_mbytes_per_sec": 0,
00:15:19.936 "r_mbytes_per_sec": 0,
00:15:19.936 "w_mbytes_per_sec": 0
00:15:19.936 },
00:15:19.936 "claimed": false,
00:15:19.936 "zoned": false,
00:15:19.936 "supported_io_types": {
00:15:19.936 "read": true,
00:15:19.936 "write": true,
00:15:19.936 "unmap": false,
00:15:19.936 "flush": false,
00:15:19.936 "reset": true,
00:15:19.936 "nvme_admin": false,
00:15:19.936 "nvme_io": false,
00:15:19.936 "nvme_io_md": false,
00:15:19.936 "write_zeroes": true,
00:15:19.936 "zcopy": false,
00:15:19.936 "get_zone_info": false,
00:15:19.936 "zone_management": false,
00:15:19.936 "zone_append": false,
00:15:19.936 "compare": false,
00:15:19.936 "compare_and_write": false,
00:15:19.936 "abort": false,
00:15:19.936 "seek_hole": false,
00:15:19.936 "seek_data": false,
00:15:19.936 "copy": false,
00:15:19.936 "nvme_iov_md": false
00:15:19.936 },
00:15:19.936 "memory_domains": [
00:15:19.936 {
00:15:19.936 "dma_device_id": "system",
00:15:19.936 "dma_device_type": 1
00:15:19.936 },
00:15:19.936 {
00:15:19.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:19.936 "dma_device_type": 2
00:15:19.936 },
00:15:19.936 {
00:15:19.936 "dma_device_id": "system",
00:15:19.936 "dma_device_type": 1
00:15:19.936 },
00:15:19.936 {
00:15:19.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:19.936 "dma_device_type": 2
00:15:19.936 }
00:15:19.936 ],
00:15:19.936 "driver_specific": {
00:15:19.936 "raid": {
00:15:19.936 "uuid": "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0",
00:15:19.936 "strip_size_kb": 0,
00:15:19.936 "state": "online",
00:15:19.936 "raid_level": "raid1",
00:15:19.936 "superblock": true,
00:15:19.936 "num_base_bdevs": 2,
00:15:19.936 "num_base_bdevs_discovered": 2,
00:15:19.936 "num_base_bdevs_operational": 2,
00:15:19.936 "base_bdevs_list": [
00:15:19.936 {
00:15:19.936 "name": "BaseBdev1",
00:15:19.936 "uuid": "df3f855c-dc20-43f0-9db9-05cbf86827fc",
00:15:19.936 "is_configured": true,
00:15:19.936 "data_offset": 2048,
00:15:19.936 "data_size": 63488
00:15:19.936 },
00:15:19.936 {
00:15:19.936 "name": "BaseBdev2",
00:15:19.936 "uuid": "33d9bbd8-86bc-4bd8-8c93-61068a6e81d9",
00:15:19.936 "is_configured": true,
00:15:19.936 "data_offset": 2048,
00:15:19.936 "data_size": 63488
00:15:19.936 }
00:15:19.936 ]
00:15:19.936 }
00:15:19.936 }
00:15:19.936 }'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:19.936 BaseBdev2'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.936 09:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:19.936 [2024-11-06 09:08:18.951030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:20.194 "name": "Existed_Raid",
00:15:20.194 "uuid": "2ecceee2-89fa-4b02-a5b8-a6f7da3d38f0",
00:15:20.194 "strip_size_kb": 0,
00:15:20.194 "state": "online",
00:15:20.194 "raid_level": "raid1",
00:15:20.194 "superblock": true,
00:15:20.194 "num_base_bdevs": 2,
00:15:20.194 "num_base_bdevs_discovered": 1,
00:15:20.194 "num_base_bdevs_operational": 1,
00:15:20.194 "base_bdevs_list": [
00:15:20.194 {
00:15:20.194 "name": null,
00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:20.194 "is_configured": false,
00:15:20.194 "data_offset": 0,
00:15:20.194 "data_size": 63488
00:15:20.194 },
00:15:20.194 {
00:15:20.194 "name": "BaseBdev2",
00:15:20.194 "uuid": "33d9bbd8-86bc-4bd8-8c93-61068a6e81d9",
00:15:20.194 "is_configured": true,
00:15:20.194 "data_offset": 2048,
00:15:20.194 "data_size": 63488
00:15:20.194 }
00:15:20.194 ]
00:15:20.194 }'
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:20.194 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:20.451 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:20.709 [2024-11-06 09:08:19.509716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:20.709 [2024-11-06 09:08:19.509823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:20.709 [2024-11-06 09:08:19.611158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:20.709 [2024-11-06 09:08:19.611428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:20.709 [2024-11-06 09:08:19.611562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62738
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62738 ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62738
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62738
killing process with pid 62738
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62738'
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62738
00:15:20.709 [2024-11-06 09:08:19.701976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:20.709 09:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62738
00:15:20.709 [2024-11-06 09:08:19.719781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:22.116 09:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:15:22.116
00:15:22.116 real 0m5.209s
00:15:22.116 user 0m7.539s
00:15:22.116 sys 0m0.886s
00:15:22.116 09:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:22.116 09:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:22.116 ************************************
00:15:22.116 END TEST raid_state_function_test_sb
00:15:22.116 ************************************
00:15:22.116 09:08:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2
00:15:22.116 09:08:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:15:22.116 09:08:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:22.116 09:08:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:22.116 ************************************
00:15:22.116 START TEST raid_superblock_test
00:15:22.116 ************************************
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62985
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62985
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62985 ']'
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.116 09:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:15:22.116 [2024-11-06 09:08:21.033813] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization...
00:15:22.116 [2024-11-06 09:08:21.034199] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62985 ]
00:15:22.374 [2024-11-06 09:08:21.230170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.374 [2024-11-06 09:08:21.353521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.635 [2024-11-06 09:08:21.570302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:22.635 [2024-11-06 09:08:21.570371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.892 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 malloc1
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 [2024-11-06 09:08:21.949475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:23.150 [2024-11-06 09:08:21.949574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.150 [2024-11-06 09:08:21.949607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:23.150 [2024-11-06 09:08:21.949626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.150 [2024-11-06 09:08:21.952253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.150 [2024-11-06 09:08:21.952473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:23.150 pt1
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.150 09:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 malloc2
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 [2024-11-06 09:08:22.009470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:23.150 [2024-11-06 09:08:22.009671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.150 [2024-11-06 09:08:22.009736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:23.150 [2024-11-06 09:08:22.009825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.150 [2024-11-06 09:08:22.012435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.150 [2024-11-06 09:08:22.012578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:23.150 pt2
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 [2024-11-06 09:08:22.021521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:23.150 [2024-11-06 09:08:22.023879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:23.150 [2024-11-06 09:08:22.024183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:15:23.150 [2024-11-06 09:08:22.024209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:23.150 [2024-11-06 09:08:22.024512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:23.150 [2024-11-06 09:08:22.024693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:15:23.150 [2024-11-06 09:08:22.024712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:15:23.150 [2024-11-06 09:08:22.024875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:23.150 "name": "raid_bdev1",
00:15:23.150 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5",
00:15:23.150 "strip_size_kb": 0,
00:15:23.150 "state": "online",
00:15:23.150 "raid_level": "raid1",
00:15:23.150 "superblock": true,
00:15:23.150 "num_base_bdevs": 2,
00:15:23.150 "num_base_bdevs_discovered": 2,
00:15:23.150 "num_base_bdevs_operational": 2,
00:15:23.150 "base_bdevs_list": [
00:15:23.150 {
00:15:23.150 "name": "pt1",
00:15:23.150 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:23.150 "is_configured": true,
00:15:23.150 "data_offset": 2048,
00:15:23.150 "data_size": 63488
00:15:23.150 },
00:15:23.150 {
00:15:23.150 "name": "pt2",
00:15:23.150 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:23.150 "is_configured": true,
00:15:23.150 "data_offset": 2048,
00:15:23.150 "data_size": 63488
00:15:23.150 }
00:15:23.150 ]
00:15:23.150 }'
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:23.150 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.716 [2024-11-06 09:08:22.501771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.716 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- #
raid_bdev_info='{ 00:15:23.716 "name": "raid_bdev1", 00:15:23.716 "aliases": [ 00:15:23.716 "5bf23fd0-1787-4468-93b7-ae8d15e448b5" 00:15:23.716 ], 00:15:23.716 "product_name": "Raid Volume", 00:15:23.716 "block_size": 512, 00:15:23.716 "num_blocks": 63488, 00:15:23.716 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:23.716 "assigned_rate_limits": { 00:15:23.716 "rw_ios_per_sec": 0, 00:15:23.716 "rw_mbytes_per_sec": 0, 00:15:23.716 "r_mbytes_per_sec": 0, 00:15:23.716 "w_mbytes_per_sec": 0 00:15:23.716 }, 00:15:23.716 "claimed": false, 00:15:23.716 "zoned": false, 00:15:23.716 "supported_io_types": { 00:15:23.716 "read": true, 00:15:23.716 "write": true, 00:15:23.716 "unmap": false, 00:15:23.716 "flush": false, 00:15:23.716 "reset": true, 00:15:23.716 "nvme_admin": false, 00:15:23.716 "nvme_io": false, 00:15:23.716 "nvme_io_md": false, 00:15:23.716 "write_zeroes": true, 00:15:23.716 "zcopy": false, 00:15:23.716 "get_zone_info": false, 00:15:23.716 "zone_management": false, 00:15:23.716 "zone_append": false, 00:15:23.716 "compare": false, 00:15:23.716 "compare_and_write": false, 00:15:23.716 "abort": false, 00:15:23.716 "seek_hole": false, 00:15:23.716 "seek_data": false, 00:15:23.716 "copy": false, 00:15:23.716 "nvme_iov_md": false 00:15:23.716 }, 00:15:23.716 "memory_domains": [ 00:15:23.716 { 00:15:23.716 "dma_device_id": "system", 00:15:23.716 "dma_device_type": 1 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.716 "dma_device_type": 2 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "dma_device_id": "system", 00:15:23.716 "dma_device_type": 1 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.717 "dma_device_type": 2 00:15:23.717 } 00:15:23.717 ], 00:15:23.717 "driver_specific": { 00:15:23.717 "raid": { 00:15:23.717 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:23.717 "strip_size_kb": 0, 00:15:23.717 "state": "online", 00:15:23.717 "raid_level": "raid1", 
00:15:23.717 "superblock": true, 00:15:23.717 "num_base_bdevs": 2, 00:15:23.717 "num_base_bdevs_discovered": 2, 00:15:23.717 "num_base_bdevs_operational": 2, 00:15:23.717 "base_bdevs_list": [ 00:15:23.717 { 00:15:23.717 "name": "pt1", 00:15:23.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.717 "is_configured": true, 00:15:23.717 "data_offset": 2048, 00:15:23.717 "data_size": 63488 00:15:23.717 }, 00:15:23.717 { 00:15:23.717 "name": "pt2", 00:15:23.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.717 "is_configured": true, 00:15:23.717 "data_offset": 2048, 00:15:23.717 "data_size": 63488 00:15:23.717 } 00:15:23.717 ] 00:15:23.717 } 00:15:23.717 } 00:15:23.717 }' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:23.717 pt2' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:23.717 [2024-11-06 09:08:22.725727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.717 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5bf23fd0-1787-4468-93b7-ae8d15e448b5 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5bf23fd0-1787-4468-93b7-ae8d15e448b5 ']' 00:15:23.975 09:08:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.975 [2024-11-06 09:08:22.765398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.975 [2024-11-06 09:08:22.765428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.975 [2024-11-06 09:08:22.765522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.975 [2024-11-06 09:08:22.765603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.975 [2024-11-06 09:08:22.765621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.975 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:23.976 09:08:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 [2024-11-06 09:08:22.901310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:23.976 [2024-11-06 09:08:22.903700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:23.976 [2024-11-06 09:08:22.903899] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:23.976 [2024-11-06 09:08:22.904111] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:23.976 [2024-11-06 09:08:22.904240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.976 [2024-11-06 09:08:22.904291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:23.976 request: 00:15:23.976 { 00:15:23.976 "name": "raid_bdev1", 00:15:23.976 "raid_level": "raid1", 00:15:23.976 "base_bdevs": [ 00:15:23.976 "malloc1", 00:15:23.976 "malloc2" 00:15:23.976 ], 00:15:23.976 "superblock": false, 00:15:23.976 "method": "bdev_raid_create", 00:15:23.976 "req_id": 1 00:15:23.976 } 00:15:23.976 Got 
JSON-RPC error response 00:15:23.976 response: 00:15:23.976 { 00:15:23.976 "code": -17, 00:15:23.976 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:23.976 } 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 [2024-11-06 09:08:22.969174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:23.976 [2024-11-06 09:08:22.969244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:23.976 [2024-11-06 09:08:22.969265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:23.976 [2024-11-06 09:08:22.969296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.976 [2024-11-06 09:08:22.971934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.976 [2024-11-06 09:08:22.972091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:23.976 [2024-11-06 09:08:22.972200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:23.976 [2024-11-06 09:08:22.972295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.976 pt1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.976 
09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 09:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.245 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.245 "name": "raid_bdev1", 00:15:24.245 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:24.245 "strip_size_kb": 0, 00:15:24.245 "state": "configuring", 00:15:24.245 "raid_level": "raid1", 00:15:24.245 "superblock": true, 00:15:24.245 "num_base_bdevs": 2, 00:15:24.246 "num_base_bdevs_discovered": 1, 00:15:24.246 "num_base_bdevs_operational": 2, 00:15:24.246 "base_bdevs_list": [ 00:15:24.246 { 00:15:24.246 "name": "pt1", 00:15:24.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.246 "is_configured": true, 00:15:24.246 "data_offset": 2048, 00:15:24.246 "data_size": 63488 00:15:24.246 }, 00:15:24.246 { 00:15:24.246 "name": null, 00:15:24.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.246 "is_configured": false, 00:15:24.246 "data_offset": 2048, 00:15:24.246 "data_size": 63488 00:15:24.246 } 00:15:24.246 ] 00:15:24.246 }' 00:15:24.246 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.246 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.524 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.524 [2024-11-06 09:08:23.360645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.524 [2024-11-06 09:08:23.360726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.524 [2024-11-06 09:08:23.360752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:24.524 [2024-11-06 09:08:23.360769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.524 [2024-11-06 09:08:23.361303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.524 [2024-11-06 09:08:23.361330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.525 [2024-11-06 09:08:23.361424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:24.525 [2024-11-06 09:08:23.361451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.525 [2024-11-06 09:08:23.361601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:24.525 [2024-11-06 09:08:23.361615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:24.525 [2024-11-06 09:08:23.361881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:24.525 [2024-11-06 09:08:23.362037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:24.525 [2024-11-06 09:08:23.362048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:15:24.525 [2024-11-06 09:08:23.362201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.525 pt2 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.525 "name": "raid_bdev1", 00:15:24.525 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:24.525 "strip_size_kb": 0, 00:15:24.525 "state": "online", 00:15:24.525 "raid_level": "raid1", 00:15:24.525 "superblock": true, 00:15:24.525 "num_base_bdevs": 2, 00:15:24.525 "num_base_bdevs_discovered": 2, 00:15:24.525 "num_base_bdevs_operational": 2, 00:15:24.525 "base_bdevs_list": [ 00:15:24.525 { 00:15:24.525 "name": "pt1", 00:15:24.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.525 "is_configured": true, 00:15:24.525 "data_offset": 2048, 00:15:24.525 "data_size": 63488 00:15:24.525 }, 00:15:24.525 { 00:15:24.525 "name": "pt2", 00:15:24.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.525 "is_configured": true, 00:15:24.525 "data_offset": 2048, 00:15:24.525 "data_size": 63488 00:15:24.525 } 00:15:24.525 ] 00:15:24.525 }' 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.525 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.783 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.783 [2024-11-06 09:08:23.792316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.041 "name": "raid_bdev1", 00:15:25.041 "aliases": [ 00:15:25.041 "5bf23fd0-1787-4468-93b7-ae8d15e448b5" 00:15:25.041 ], 00:15:25.041 "product_name": "Raid Volume", 00:15:25.041 "block_size": 512, 00:15:25.041 "num_blocks": 63488, 00:15:25.041 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:25.041 "assigned_rate_limits": { 00:15:25.041 "rw_ios_per_sec": 0, 00:15:25.041 "rw_mbytes_per_sec": 0, 00:15:25.041 "r_mbytes_per_sec": 0, 00:15:25.041 "w_mbytes_per_sec": 0 00:15:25.041 }, 00:15:25.041 "claimed": false, 00:15:25.041 "zoned": false, 00:15:25.041 "supported_io_types": { 00:15:25.041 "read": true, 00:15:25.041 "write": true, 00:15:25.041 "unmap": false, 00:15:25.041 "flush": false, 00:15:25.041 "reset": true, 00:15:25.041 "nvme_admin": false, 00:15:25.041 "nvme_io": false, 00:15:25.041 "nvme_io_md": false, 00:15:25.041 "write_zeroes": true, 00:15:25.041 "zcopy": false, 00:15:25.041 "get_zone_info": false, 00:15:25.041 "zone_management": false, 00:15:25.041 "zone_append": false, 00:15:25.041 "compare": false, 00:15:25.041 "compare_and_write": false, 00:15:25.041 "abort": false, 00:15:25.041 "seek_hole": false, 00:15:25.041 "seek_data": false, 00:15:25.041 "copy": false, 00:15:25.041 "nvme_iov_md": false 00:15:25.041 }, 00:15:25.041 "memory_domains": [ 00:15:25.041 { 00:15:25.041 "dma_device_id": 
"system", 00:15:25.041 "dma_device_type": 1 00:15:25.041 }, 00:15:25.041 { 00:15:25.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.041 "dma_device_type": 2 00:15:25.041 }, 00:15:25.041 { 00:15:25.041 "dma_device_id": "system", 00:15:25.041 "dma_device_type": 1 00:15:25.041 }, 00:15:25.041 { 00:15:25.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.041 "dma_device_type": 2 00:15:25.041 } 00:15:25.041 ], 00:15:25.041 "driver_specific": { 00:15:25.041 "raid": { 00:15:25.041 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:25.041 "strip_size_kb": 0, 00:15:25.041 "state": "online", 00:15:25.041 "raid_level": "raid1", 00:15:25.041 "superblock": true, 00:15:25.041 "num_base_bdevs": 2, 00:15:25.041 "num_base_bdevs_discovered": 2, 00:15:25.041 "num_base_bdevs_operational": 2, 00:15:25.041 "base_bdevs_list": [ 00:15:25.041 { 00:15:25.041 "name": "pt1", 00:15:25.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.041 "is_configured": true, 00:15:25.041 "data_offset": 2048, 00:15:25.041 "data_size": 63488 00:15:25.041 }, 00:15:25.041 { 00:15:25.041 "name": "pt2", 00:15:25.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.041 "is_configured": true, 00:15:25.041 "data_offset": 2048, 00:15:25.041 "data_size": 63488 00:15:25.041 } 00:15:25.041 ] 00:15:25.041 } 00:15:25.041 } 00:15:25.041 }' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:25.041 pt2' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.041 09:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.042 [2024-11-06 09:08:24.023965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5bf23fd0-1787-4468-93b7-ae8d15e448b5 '!=' 5bf23fd0-1787-4468-93b7-ae8d15e448b5 ']' 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.042 [2024-11-06 09:08:24.055730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.042 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.299 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.299 "name": "raid_bdev1", 00:15:25.299 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:25.299 "strip_size_kb": 0, 00:15:25.299 "state": "online", 00:15:25.299 "raid_level": "raid1", 00:15:25.299 "superblock": true, 00:15:25.299 "num_base_bdevs": 2, 00:15:25.299 "num_base_bdevs_discovered": 1, 00:15:25.299 "num_base_bdevs_operational": 1, 00:15:25.299 "base_bdevs_list": [ 00:15:25.299 { 00:15:25.299 "name": null, 00:15:25.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.299 "is_configured": false, 00:15:25.299 "data_offset": 0, 00:15:25.299 "data_size": 63488 00:15:25.299 }, 00:15:25.300 { 00:15:25.300 "name": "pt2", 00:15:25.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.300 "is_configured": true, 00:15:25.300 "data_offset": 2048, 00:15:25.300 "data_size": 63488 00:15:25.300 } 00:15:25.300 ] 00:15:25.300 }' 
00:15:25.300 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.300 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 [2024-11-06 09:08:24.459155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.559 [2024-11-06 09:08:24.459330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.559 [2024-11-06 09:08:24.459523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.559 [2024-11-06 09:08:24.459581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.559 [2024-11-06 09:08:24.459597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 [2024-11-06 09:08:24.531035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.559 [2024-11-06 09:08:24.531106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.559 [2024-11-06 09:08:24.531127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:25.559 [2024-11-06 09:08:24.531142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.559 
[2024-11-06 09:08:24.533743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.559 [2024-11-06 09:08:24.533894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.559 [2024-11-06 09:08:24.534003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:25.559 [2024-11-06 09:08:24.534057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.559 [2024-11-06 09:08:24.534180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:25.559 [2024-11-06 09:08:24.534196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:25.559 [2024-11-06 09:08:24.534460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.559 [2024-11-06 09:08:24.534601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:25.559 [2024-11-06 09:08:24.534611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:25.559 [2024-11-06 09:08:24.534753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.559 pt2 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.559 "name": "raid_bdev1", 00:15:25.559 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:25.559 "strip_size_kb": 0, 00:15:25.559 "state": "online", 00:15:25.559 "raid_level": "raid1", 00:15:25.559 "superblock": true, 00:15:25.559 "num_base_bdevs": 2, 00:15:25.559 "num_base_bdevs_discovered": 1, 00:15:25.559 "num_base_bdevs_operational": 1, 00:15:25.559 "base_bdevs_list": [ 00:15:25.559 { 00:15:25.559 "name": null, 00:15:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.559 "is_configured": false, 00:15:25.559 "data_offset": 2048, 00:15:25.559 "data_size": 63488 00:15:25.559 }, 00:15:25.559 { 00:15:25.559 "name": "pt2", 00:15:25.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.559 "is_configured": true, 00:15:25.559 "data_offset": 2048, 00:15:25.559 "data_size": 63488 00:15:25.559 } 00:15:25.559 ] 00:15:25.559 }' 
00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.559 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.126 [2024-11-06 09:08:24.978401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.126 [2024-11-06 09:08:24.978434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.126 [2024-11-06 09:08:24.978515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.126 [2024-11-06 09:08:24.978573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.126 [2024-11-06 09:08:24.978585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:26.126 09:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.127 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.127 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.127 09:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.127 [2024-11-06 09:08:25.026372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.127 [2024-11-06 09:08:25.026440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.127 [2024-11-06 09:08:25.026465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:26.127 [2024-11-06 09:08:25.026479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.127 [2024-11-06 09:08:25.029042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.127 [2024-11-06 09:08:25.029083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:26.127 [2024-11-06 09:08:25.029175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:26.127 [2024-11-06 09:08:25.029226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.127 [2024-11-06 09:08:25.029406] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:26.127 [2024-11-06 09:08:25.029420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.127 [2024-11-06 09:08:25.029438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:26.127 [2024-11-06 09:08:25.029501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:15:26.127 [2024-11-06 09:08:25.029616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:26.127 [2024-11-06 09:08:25.029627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:26.127 [2024-11-06 09:08:25.029908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.127 [2024-11-06 09:08:25.030055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:26.127 [2024-11-06 09:08:25.030070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:26.127 [2024-11-06 09:08:25.030213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.127 pt1 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.127 "name": "raid_bdev1", 00:15:26.127 "uuid": "5bf23fd0-1787-4468-93b7-ae8d15e448b5", 00:15:26.127 "strip_size_kb": 0, 00:15:26.127 "state": "online", 00:15:26.127 "raid_level": "raid1", 00:15:26.127 "superblock": true, 00:15:26.127 "num_base_bdevs": 2, 00:15:26.127 "num_base_bdevs_discovered": 1, 00:15:26.127 "num_base_bdevs_operational": 1, 00:15:26.127 "base_bdevs_list": [ 00:15:26.127 { 00:15:26.127 "name": null, 00:15:26.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.127 "is_configured": false, 00:15:26.127 "data_offset": 2048, 00:15:26.127 "data_size": 63488 00:15:26.127 }, 00:15:26.127 { 00:15:26.127 "name": "pt2", 00:15:26.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.127 "is_configured": true, 00:15:26.127 "data_offset": 2048, 00:15:26.127 "data_size": 63488 00:15:26.127 } 00:15:26.127 ] 00:15:26.127 }' 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.127 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.399 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:26.399 09:08:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.399 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:26.399 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.658 [2024-11-06 09:08:25.469951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5bf23fd0-1787-4468-93b7-ae8d15e448b5 '!=' 5bf23fd0-1787-4468-93b7-ae8d15e448b5 ']' 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62985 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62985 ']' 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62985 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62985 00:15:26.658 killing process with pid 
62985 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:26.658 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:26.659 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62985' 00:15:26.659 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62985 00:15:26.659 [2024-11-06 09:08:25.565779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.659 09:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62985 00:15:26.659 [2024-11-06 09:08:25.565877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.659 [2024-11-06 09:08:25.565927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.659 [2024-11-06 09:08:25.565946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:26.917 [2024-11-06 09:08:25.778780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.294 09:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:28.294 00:15:28.294 real 0m6.031s 00:15:28.294 user 0m9.082s 00:15:28.294 sys 0m1.113s 00:15:28.294 ************************************ 00:15:28.294 END TEST raid_superblock_test 00:15:28.294 ************************************ 00:15:28.294 09:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.294 09:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.294 09:08:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:15:28.294 09:08:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:28.294 09:08:27 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.294 09:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.294 ************************************ 00:15:28.294 START TEST raid_read_error_test 00:15:28.294 ************************************ 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:28.294 09:08:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NWNPbJy2zr 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63315 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63315 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63315 ']' 00:15:28.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.294 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.295 09:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.295 [2024-11-06 09:08:27.157578] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:15:28.295 [2024-11-06 09:08:27.157726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63315 ] 00:15:28.552 [2024-11-06 09:08:27.339920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.552 [2024-11-06 09:08:27.467521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.810 [2024-11-06 09:08:27.710012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.810 [2024-11-06 09:08:27.710083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.069 BaseBdev1_malloc 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.069 true 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.069 [2024-11-06 09:08:28.079529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:29.069 [2024-11-06 09:08:28.079594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.069 [2024-11-06 09:08:28.079619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:29.069 [2024-11-06 09:08:28.079634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.069 [2024-11-06 09:08:28.082239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.069 [2024-11-06 09:08:28.082439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.069 BaseBdev1 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.069 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:29.327 BaseBdev2_malloc 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.327 true 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.327 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.327 [2024-11-06 09:08:28.138632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:29.327 [2024-11-06 09:08:28.138695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.327 [2024-11-06 09:08:28.138717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:29.328 [2024-11-06 09:08:28.138744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.328 [2024-11-06 09:08:28.141273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.328 [2024-11-06 09:08:28.141335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.328 BaseBdev2 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:29.328 09:08:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.328 [2024-11-06 09:08:28.146696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.328 [2024-11-06 09:08:28.148966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.328 [2024-11-06 09:08:28.149372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:29.328 [2024-11-06 09:08:28.149399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:29.328 [2024-11-06 09:08:28.149688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:29.328 [2024-11-06 09:08:28.149887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:29.328 [2024-11-06 09:08:28.149901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:29.328 [2024-11-06 09:08:28.150069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.328 "name": "raid_bdev1", 00:15:29.328 "uuid": "a32e0128-447a-4e73-b06e-cbc240a1caff", 00:15:29.328 "strip_size_kb": 0, 00:15:29.328 "state": "online", 00:15:29.328 "raid_level": "raid1", 00:15:29.328 "superblock": true, 00:15:29.328 "num_base_bdevs": 2, 00:15:29.328 "num_base_bdevs_discovered": 2, 00:15:29.328 "num_base_bdevs_operational": 2, 00:15:29.328 "base_bdevs_list": [ 00:15:29.328 { 00:15:29.328 "name": "BaseBdev1", 00:15:29.328 "uuid": "3fb761ba-ef43-50f8-8b97-2940948c0b05", 00:15:29.328 "is_configured": true, 00:15:29.328 "data_offset": 2048, 00:15:29.328 "data_size": 63488 00:15:29.328 }, 00:15:29.328 { 00:15:29.328 "name": "BaseBdev2", 00:15:29.328 "uuid": "5bca4a00-e493-58f9-b739-15eca0e90018", 00:15:29.328 "is_configured": true, 00:15:29.328 "data_offset": 2048, 00:15:29.328 "data_size": 63488 00:15:29.328 } 00:15:29.328 ] 00:15:29.328 }' 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.328 09:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.587 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:29.587 09:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:29.587 [2024-11-06 09:08:28.615647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.525 09:08:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.525 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.526 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.785 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.785 "name": "raid_bdev1", 00:15:30.785 "uuid": "a32e0128-447a-4e73-b06e-cbc240a1caff", 00:15:30.785 "strip_size_kb": 0, 00:15:30.785 "state": "online", 00:15:30.785 "raid_level": "raid1", 00:15:30.785 "superblock": true, 00:15:30.785 "num_base_bdevs": 2, 00:15:30.785 "num_base_bdevs_discovered": 2, 00:15:30.785 "num_base_bdevs_operational": 2, 00:15:30.785 "base_bdevs_list": [ 00:15:30.785 { 00:15:30.785 "name": "BaseBdev1", 00:15:30.785 "uuid": "3fb761ba-ef43-50f8-8b97-2940948c0b05", 00:15:30.785 "is_configured": true, 00:15:30.785 "data_offset": 2048, 00:15:30.785 "data_size": 63488 00:15:30.785 }, 00:15:30.785 { 00:15:30.785 "name": "BaseBdev2", 00:15:30.785 "uuid": "5bca4a00-e493-58f9-b739-15eca0e90018", 00:15:30.785 "is_configured": true, 00:15:30.785 "data_offset": 2048, 00:15:30.785 "data_size": 63488 
00:15:30.785 } 00:15:30.785 ] 00:15:30.785 }' 00:15:30.785 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.785 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.043 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.044 [2024-11-06 09:08:29.986838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.044 [2024-11-06 09:08:29.987016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.044 [2024-11-06 09:08:29.989907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.044 [2024-11-06 09:08:29.989961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.044 [2024-11-06 09:08:29.990047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.044 [2024-11-06 09:08:29.990063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:31.044 { 00:15:31.044 "results": [ 00:15:31.044 { 00:15:31.044 "job": "raid_bdev1", 00:15:31.044 "core_mask": "0x1", 00:15:31.044 "workload": "randrw", 00:15:31.044 "percentage": 50, 00:15:31.044 "status": "finished", 00:15:31.044 "queue_depth": 1, 00:15:31.044 "io_size": 131072, 00:15:31.044 "runtime": 1.371252, 00:15:31.044 "iops": 17707.90489275494, 00:15:31.044 "mibps": 2213.4881115943676, 00:15:31.044 "io_failed": 0, 00:15:31.044 "io_timeout": 0, 00:15:31.044 "avg_latency_us": 53.63407379621442, 00:15:31.044 "min_latency_us": 24.366265060240963, 00:15:31.044 "max_latency_us": 1552.8610441767069 00:15:31.044 } 00:15:31.044 ], 
00:15:31.044 "core_count": 1 00:15:31.044 } 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63315 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63315 ']' 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63315 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:31.044 09:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63315 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63315' 00:15:31.044 killing process with pid 63315 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63315 00:15:31.044 [2024-11-06 09:08:30.040045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.044 09:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63315 00:15:31.301 [2024-11-06 09:08:30.182605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NWNPbJy2zr 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.677 09:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:32.677 00:15:32.677 real 0m4.424s 00:15:32.677 user 0m5.217s 00:15:32.677 sys 0m0.610s 00:15:32.677 ************************************ 00:15:32.677 END TEST raid_read_error_test 00:15:32.678 ************************************ 00:15:32.678 09:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.678 09:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.678 09:08:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:15:32.678 09:08:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:32.678 09:08:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.678 09:08:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.678 ************************************ 00:15:32.678 START TEST raid_write_error_test 00:15:32.678 ************************************ 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oWsyv5GziH 00:15:32.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63461 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63461 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63461 ']' 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.678 09:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:32.678 [2024-11-06 09:08:31.641073] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:15:32.678 [2024-11-06 09:08:31.641209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63461 ] 00:15:32.936 [2024-11-06 09:08:31.826683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.936 [2024-11-06 09:08:31.970643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.193 [2024-11-06 09:08:32.196313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.193 [2024-11-06 09:08:32.196386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.761 BaseBdev1_malloc 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.761 true 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.761 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.761 [2024-11-06 09:08:32.574138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:33.761 [2024-11-06 09:08:32.574205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.762 [2024-11-06 09:08:32.574245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:33.762 [2024-11-06 09:08:32.574260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.762 [2024-11-06 09:08:32.576711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.762 [2024-11-06 09:08:32.576912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.762 BaseBdev1 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 BaseBdev2_malloc 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:33.762 09:08:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 true 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 [2024-11-06 09:08:32.643721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:33.762 [2024-11-06 09:08:32.643775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.762 [2024-11-06 09:08:32.643793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:33.762 [2024-11-06 09:08:32.643806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.762 [2024-11-06 09:08:32.646230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.762 [2024-11-06 09:08:32.646418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.762 BaseBdev2 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 [2024-11-06 09:08:32.655763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:15:33.762 [2024-11-06 09:08:32.657947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.762 [2024-11-06 09:08:32.658286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:33.762 [2024-11-06 09:08:32.658310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:33.762 [2024-11-06 09:08:32.658553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:33.762 [2024-11-06 09:08:32.658732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:33.762 [2024-11-06 09:08:32.658744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:33.762 [2024-11-06 09:08:32.658883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.762 "name": "raid_bdev1", 00:15:33.762 "uuid": "10f1d6d0-1b7e-44d1-9d9a-d30e5bd24dda", 00:15:33.762 "strip_size_kb": 0, 00:15:33.762 "state": "online", 00:15:33.762 "raid_level": "raid1", 00:15:33.762 "superblock": true, 00:15:33.762 "num_base_bdevs": 2, 00:15:33.762 "num_base_bdevs_discovered": 2, 00:15:33.762 "num_base_bdevs_operational": 2, 00:15:33.762 "base_bdevs_list": [ 00:15:33.762 { 00:15:33.762 "name": "BaseBdev1", 00:15:33.762 "uuid": "838a3f2e-da3d-5706-a14f-7970caec7fde", 00:15:33.762 "is_configured": true, 00:15:33.762 "data_offset": 2048, 00:15:33.762 "data_size": 63488 00:15:33.762 }, 00:15:33.762 { 00:15:33.762 "name": "BaseBdev2", 00:15:33.762 "uuid": "8151320d-144a-50a1-b763-623f48c5f597", 00:15:33.762 "is_configured": true, 00:15:33.762 "data_offset": 2048, 00:15:33.762 "data_size": 63488 00:15:33.762 } 00:15:33.762 ] 00:15:33.762 }' 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.762 09:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 09:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:34.330 09:08:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:34.330 [2024-11-06 09:08:33.196199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.268 [2024-11-06 09:08:34.108642] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:35.268 [2024-11-06 09:08:34.108707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.268 [2024-11-06 09:08:34.108911] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.268 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.268 "name": "raid_bdev1", 00:15:35.269 "uuid": "10f1d6d0-1b7e-44d1-9d9a-d30e5bd24dda", 00:15:35.269 "strip_size_kb": 0, 00:15:35.269 "state": "online", 00:15:35.269 "raid_level": "raid1", 00:15:35.269 "superblock": true, 00:15:35.269 "num_base_bdevs": 2, 00:15:35.269 "num_base_bdevs_discovered": 1, 00:15:35.269 "num_base_bdevs_operational": 1, 00:15:35.269 "base_bdevs_list": [ 00:15:35.269 { 00:15:35.269 "name": null, 00:15:35.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.269 "is_configured": false, 00:15:35.269 "data_offset": 0, 00:15:35.269 "data_size": 63488 00:15:35.269 }, 00:15:35.269 { 00:15:35.269 "name": 
"BaseBdev2", 00:15:35.269 "uuid": "8151320d-144a-50a1-b763-623f48c5f597", 00:15:35.269 "is_configured": true, 00:15:35.269 "data_offset": 2048, 00:15:35.269 "data_size": 63488 00:15:35.269 } 00:15:35.269 ] 00:15:35.269 }' 00:15:35.269 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.269 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.531 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.531 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.531 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.531 [2024-11-06 09:08:34.533540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.531 [2024-11-06 09:08:34.533583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.531 [2024-11-06 09:08:34.536183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.531 [2024-11-06 09:08:34.536228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.531 [2024-11-06 09:08:34.536305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.531 [2024-11-06 09:08:34.536318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:35.531 { 00:15:35.531 "results": [ 00:15:35.531 { 00:15:35.531 "job": "raid_bdev1", 00:15:35.531 "core_mask": "0x1", 00:15:35.531 "workload": "randrw", 00:15:35.531 "percentage": 50, 00:15:35.531 "status": "finished", 00:15:35.531 "queue_depth": 1, 00:15:35.531 "io_size": 131072, 00:15:35.531 "runtime": 1.337149, 00:15:35.531 "iops": 22351.286206697983, 00:15:35.531 "mibps": 2793.910775837248, 00:15:35.531 "io_failed": 0, 00:15:35.531 "io_timeout": 0, 
00:15:35.531 "avg_latency_us": 42.03845924064982, 00:15:35.531 "min_latency_us": 23.132530120481928, 00:15:35.531 "max_latency_us": 1480.4819277108434 00:15:35.531 } 00:15:35.531 ], 00:15:35.531 "core_count": 1 00:15:35.531 } 00:15:35.531 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63461 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63461 ']' 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63461 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.532 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63461 00:15:35.832 killing process with pid 63461 00:15:35.832 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.832 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.832 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63461' 00:15:35.832 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63461 00:15:35.832 [2024-11-06 09:08:34.587990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.832 09:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63461 00:15:35.832 [2024-11-06 09:08:34.723592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oWsyv5GziH 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:37.209 00:15:37.209 real 0m4.402s 00:15:37.209 user 0m5.218s 00:15:37.209 sys 0m0.618s 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:37.209 09:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 ************************************ 00:15:37.209 END TEST raid_write_error_test 00:15:37.209 ************************************ 00:15:37.209 09:08:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:37.209 09:08:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:37.209 09:08:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:37.209 09:08:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:37.209 09:08:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.209 09:08:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 ************************************ 00:15:37.209 START TEST raid_state_function_test 00:15:37.209 ************************************ 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:37.209 
09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63599 00:15:37.209 Process raid pid: 63599 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63599' 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63599 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63599 ']' 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.209 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:37.210 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:37.210 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:37.210 09:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.210 [2024-11-06 09:08:36.099575] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:15:37.210 [2024-11-06 09:08:36.099722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.468 [2024-11-06 09:08:36.268463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.468 [2024-11-06 09:08:36.412674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.726 [2024-11-06 09:08:36.660647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.726 [2024-11-06 09:08:36.660697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.985 [2024-11-06 09:08:37.013421] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.985 [2024-11-06 09:08:37.013481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.985 [2024-11-06 09:08:37.013495] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.985 [2024-11-06 09:08:37.013509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.985 [2024-11-06 09:08:37.013518] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.985 [2024-11-06 09:08:37.013530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.985 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.244 "name": "Existed_Raid", 00:15:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.244 "strip_size_kb": 64, 00:15:38.244 "state": "configuring", 00:15:38.244 "raid_level": "raid0", 00:15:38.244 "superblock": false, 00:15:38.244 "num_base_bdevs": 3, 00:15:38.244 "num_base_bdevs_discovered": 0, 00:15:38.244 "num_base_bdevs_operational": 3, 00:15:38.244 "base_bdevs_list": [ 00:15:38.244 { 00:15:38.244 "name": "BaseBdev1", 00:15:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.244 "is_configured": false, 00:15:38.244 "data_offset": 0, 00:15:38.244 "data_size": 0 00:15:38.244 }, 00:15:38.244 { 00:15:38.244 "name": "BaseBdev2", 00:15:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.244 "is_configured": false, 00:15:38.244 "data_offset": 0, 00:15:38.244 "data_size": 0 00:15:38.244 }, 00:15:38.244 { 00:15:38.244 "name": "BaseBdev3", 00:15:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.244 "is_configured": false, 00:15:38.244 "data_offset": 0, 00:15:38.244 "data_size": 0 00:15:38.244 } 00:15:38.244 ] 00:15:38.244 }' 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.244 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.503 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.503 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.503 09:08:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.503 [2024-11-06 09:08:37.436774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.503 [2024-11-06 09:08:37.436815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:38.503 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.503 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.503 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.504 [2024-11-06 09:08:37.448740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.504 [2024-11-06 09:08:37.448795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.504 [2024-11-06 09:08:37.448806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.504 [2024-11-06 09:08:37.448819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.504 [2024-11-06 09:08:37.448827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.504 [2024-11-06 09:08:37.448839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.504 [2024-11-06 09:08:37.500056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.504 BaseBdev1 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.504 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.504 [ 00:15:38.504 { 00:15:38.504 "name": "BaseBdev1", 00:15:38.504 "aliases": [ 00:15:38.504 "4797ae78-86e2-4e16-8e05-d67d20b46d1c" 00:15:38.504 ], 00:15:38.504 
"product_name": "Malloc disk", 00:15:38.504 "block_size": 512, 00:15:38.504 "num_blocks": 65536, 00:15:38.504 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:38.504 "assigned_rate_limits": { 00:15:38.504 "rw_ios_per_sec": 0, 00:15:38.504 "rw_mbytes_per_sec": 0, 00:15:38.504 "r_mbytes_per_sec": 0, 00:15:38.504 "w_mbytes_per_sec": 0 00:15:38.504 }, 00:15:38.504 "claimed": true, 00:15:38.504 "claim_type": "exclusive_write", 00:15:38.504 "zoned": false, 00:15:38.504 "supported_io_types": { 00:15:38.504 "read": true, 00:15:38.504 "write": true, 00:15:38.504 "unmap": true, 00:15:38.504 "flush": true, 00:15:38.504 "reset": true, 00:15:38.504 "nvme_admin": false, 00:15:38.504 "nvme_io": false, 00:15:38.504 "nvme_io_md": false, 00:15:38.504 "write_zeroes": true, 00:15:38.504 "zcopy": true, 00:15:38.504 "get_zone_info": false, 00:15:38.504 "zone_management": false, 00:15:38.504 "zone_append": false, 00:15:38.504 "compare": false, 00:15:38.504 "compare_and_write": false, 00:15:38.504 "abort": true, 00:15:38.504 "seek_hole": false, 00:15:38.504 "seek_data": false, 00:15:38.504 "copy": true, 00:15:38.504 "nvme_iov_md": false 00:15:38.504 }, 00:15:38.504 "memory_domains": [ 00:15:38.504 { 00:15:38.504 "dma_device_id": "system", 00:15:38.504 "dma_device_type": 1 00:15:38.763 }, 00:15:38.763 { 00:15:38.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.763 "dma_device_type": 2 00:15:38.763 } 00:15:38.763 ], 00:15:38.763 "driver_specific": {} 00:15:38.763 } 00:15:38.763 ] 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.763 09:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.763 "name": "Existed_Raid", 00:15:38.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.763 "strip_size_kb": 64, 00:15:38.763 "state": "configuring", 00:15:38.763 "raid_level": "raid0", 00:15:38.763 "superblock": false, 00:15:38.763 "num_base_bdevs": 3, 00:15:38.763 "num_base_bdevs_discovered": 1, 00:15:38.763 "num_base_bdevs_operational": 3, 00:15:38.763 "base_bdevs_list": [ 00:15:38.763 { 00:15:38.763 "name": "BaseBdev1", 
00:15:38.763 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:38.763 "is_configured": true, 00:15:38.763 "data_offset": 0, 00:15:38.763 "data_size": 65536 00:15:38.763 }, 00:15:38.763 { 00:15:38.763 "name": "BaseBdev2", 00:15:38.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.763 "is_configured": false, 00:15:38.763 "data_offset": 0, 00:15:38.763 "data_size": 0 00:15:38.763 }, 00:15:38.763 { 00:15:38.763 "name": "BaseBdev3", 00:15:38.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.763 "is_configured": false, 00:15:38.763 "data_offset": 0, 00:15:38.763 "data_size": 0 00:15:38.763 } 00:15:38.763 ] 00:15:38.763 }' 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.763 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.020 [2024-11-06 09:08:37.935522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.020 [2024-11-06 09:08:37.935584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.020 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.020 [2024-11-06 
09:08:37.947556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.021 [2024-11-06 09:08:37.949774] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.021 [2024-11-06 09:08:37.949823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.021 [2024-11-06 09:08:37.949836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.021 [2024-11-06 09:08:37.949865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.021 "name": "Existed_Raid", 00:15:39.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.021 "strip_size_kb": 64, 00:15:39.021 "state": "configuring", 00:15:39.021 "raid_level": "raid0", 00:15:39.021 "superblock": false, 00:15:39.021 "num_base_bdevs": 3, 00:15:39.021 "num_base_bdevs_discovered": 1, 00:15:39.021 "num_base_bdevs_operational": 3, 00:15:39.021 "base_bdevs_list": [ 00:15:39.021 { 00:15:39.021 "name": "BaseBdev1", 00:15:39.021 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:39.021 "is_configured": true, 00:15:39.021 "data_offset": 0, 00:15:39.021 "data_size": 65536 00:15:39.021 }, 00:15:39.021 { 00:15:39.021 "name": "BaseBdev2", 00:15:39.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.021 "is_configured": false, 00:15:39.021 "data_offset": 0, 00:15:39.021 "data_size": 0 00:15:39.021 }, 00:15:39.021 { 00:15:39.021 "name": "BaseBdev3", 00:15:39.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.021 "is_configured": false, 00:15:39.021 "data_offset": 0, 00:15:39.021 "data_size": 0 00:15:39.021 } 00:15:39.021 ] 00:15:39.021 }' 00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:39.021 09:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 [2024-11-06 09:08:38.412188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.588 BaseBdev2 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.588 09:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 [ 00:15:39.588 { 00:15:39.588 "name": "BaseBdev2", 00:15:39.588 "aliases": [ 00:15:39.588 "5b78d110-38ca-4fb4-bec5-8e131d9b988a" 00:15:39.588 ], 00:15:39.588 "product_name": "Malloc disk", 00:15:39.588 "block_size": 512, 00:15:39.588 "num_blocks": 65536, 00:15:39.588 "uuid": "5b78d110-38ca-4fb4-bec5-8e131d9b988a", 00:15:39.588 "assigned_rate_limits": { 00:15:39.588 "rw_ios_per_sec": 0, 00:15:39.588 "rw_mbytes_per_sec": 0, 00:15:39.588 "r_mbytes_per_sec": 0, 00:15:39.588 "w_mbytes_per_sec": 0 00:15:39.588 }, 00:15:39.588 "claimed": true, 00:15:39.588 "claim_type": "exclusive_write", 00:15:39.588 "zoned": false, 00:15:39.588 "supported_io_types": { 00:15:39.588 "read": true, 00:15:39.588 "write": true, 00:15:39.588 "unmap": true, 00:15:39.588 "flush": true, 00:15:39.588 "reset": true, 00:15:39.588 "nvme_admin": false, 00:15:39.588 "nvme_io": false, 00:15:39.588 "nvme_io_md": false, 00:15:39.588 "write_zeroes": true, 00:15:39.588 "zcopy": true, 00:15:39.588 "get_zone_info": false, 00:15:39.588 "zone_management": false, 00:15:39.588 "zone_append": false, 00:15:39.588 "compare": false, 00:15:39.588 "compare_and_write": false, 00:15:39.588 "abort": true, 00:15:39.588 "seek_hole": false, 00:15:39.588 "seek_data": false, 00:15:39.588 "copy": true, 00:15:39.588 "nvme_iov_md": false 00:15:39.588 }, 00:15:39.588 "memory_domains": [ 00:15:39.588 { 00:15:39.588 "dma_device_id": "system", 00:15:39.588 "dma_device_type": 1 00:15:39.588 }, 00:15:39.588 { 00:15:39.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.588 "dma_device_type": 2 00:15:39.588 } 00:15:39.588 ], 00:15:39.588 "driver_specific": {} 00:15:39.588 } 00:15:39.588 ] 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.588 09:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.588 "name": "Existed_Raid", 00:15:39.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.588 "strip_size_kb": 64, 00:15:39.588 "state": "configuring", 00:15:39.588 "raid_level": "raid0", 00:15:39.588 "superblock": false, 00:15:39.588 "num_base_bdevs": 3, 00:15:39.588 "num_base_bdevs_discovered": 2, 00:15:39.588 "num_base_bdevs_operational": 3, 00:15:39.588 "base_bdevs_list": [ 00:15:39.588 { 00:15:39.588 "name": "BaseBdev1", 00:15:39.588 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:39.588 "is_configured": true, 00:15:39.588 "data_offset": 0, 00:15:39.588 "data_size": 65536 00:15:39.588 }, 00:15:39.588 { 00:15:39.588 "name": "BaseBdev2", 00:15:39.588 "uuid": "5b78d110-38ca-4fb4-bec5-8e131d9b988a", 00:15:39.588 "is_configured": true, 00:15:39.588 "data_offset": 0, 00:15:39.588 "data_size": 65536 00:15:39.588 }, 00:15:39.588 { 00:15:39.588 "name": "BaseBdev3", 00:15:39.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.588 "is_configured": false, 00:15:39.588 "data_offset": 0, 00:15:39.588 "data_size": 0 00:15:39.588 } 00:15:39.588 ] 00:15:39.588 }' 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.588 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.155 [2024-11-06 09:08:38.949198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.155 [2024-11-06 09:08:38.949251] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:40.155 [2024-11-06 09:08:38.949268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:40.155 [2024-11-06 09:08:38.949752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:40.155 [2024-11-06 09:08:38.949927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:40.155 [2024-11-06 09:08:38.949939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:40.155 [2024-11-06 09:08:38.950224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.155 BaseBdev3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.155 
09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.155 [ 00:15:40.155 { 00:15:40.155 "name": "BaseBdev3", 00:15:40.155 "aliases": [ 00:15:40.155 "3d8e2d7e-b901-4e23-b64d-b1d1cb2488da" 00:15:40.155 ], 00:15:40.155 "product_name": "Malloc disk", 00:15:40.155 "block_size": 512, 00:15:40.155 "num_blocks": 65536, 00:15:40.155 "uuid": "3d8e2d7e-b901-4e23-b64d-b1d1cb2488da", 00:15:40.155 "assigned_rate_limits": { 00:15:40.155 "rw_ios_per_sec": 0, 00:15:40.155 "rw_mbytes_per_sec": 0, 00:15:40.155 "r_mbytes_per_sec": 0, 00:15:40.155 "w_mbytes_per_sec": 0 00:15:40.155 }, 00:15:40.155 "claimed": true, 00:15:40.155 "claim_type": "exclusive_write", 00:15:40.155 "zoned": false, 00:15:40.155 "supported_io_types": { 00:15:40.155 "read": true, 00:15:40.155 "write": true, 00:15:40.155 "unmap": true, 00:15:40.155 "flush": true, 00:15:40.155 "reset": true, 00:15:40.155 "nvme_admin": false, 00:15:40.155 "nvme_io": false, 00:15:40.155 "nvme_io_md": false, 00:15:40.155 "write_zeroes": true, 00:15:40.155 "zcopy": true, 00:15:40.155 "get_zone_info": false, 00:15:40.155 "zone_management": false, 00:15:40.155 "zone_append": false, 00:15:40.155 "compare": false, 00:15:40.155 "compare_and_write": false, 00:15:40.155 "abort": true, 00:15:40.155 "seek_hole": false, 00:15:40.155 "seek_data": false, 00:15:40.155 "copy": true, 00:15:40.155 "nvme_iov_md": false 00:15:40.155 }, 00:15:40.155 "memory_domains": [ 00:15:40.155 { 00:15:40.155 "dma_device_id": "system", 00:15:40.155 "dma_device_type": 1 00:15:40.155 }, 00:15:40.155 { 00:15:40.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.155 "dma_device_type": 2 00:15:40.155 } 00:15:40.155 ], 00:15:40.155 "driver_specific": {} 00:15:40.155 } 00:15:40.155 ] 
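The trace that follows shows `verify_raid_bdev_state Existed_Raid online raid0 64 3` in action: once BaseBdev3 is claimed, the test fetches `bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares state, RAID level, strip size, and base-bdev counts against the expected values. Below is a minimal Python re-expression of that check, offered only as a sketch — the real test is bash + jq, and the sample record is abbreviated from the JSON dumped in this log:

```python
import json

# Abbreviated copy of the "Existed_Raid" record printed by
# `bdev_raid_get_bdevs all` in the trace (fields taken from the log;
# the base_bdevs_list and other fields are omitted for brevity).
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid0",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Equivalent of jq's '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    # The bash helper compares each field and fails the test on mismatch;
    # plain assertions stand in for that here.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "online", "raid0", 64, 3)
```

After this check passes, the test proceeds to delete BaseBdev1 and re-verify that the array transitions from `online` to `offline` with only two discovered base bdevs, as the later trace records show.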
00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.155 09:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.155 09:08:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.155 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.155 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.155 "name": "Existed_Raid", 00:15:40.155 "uuid": "3d343dbf-a109-4e9a-bdcc-734b0def2c56", 00:15:40.155 "strip_size_kb": 64, 00:15:40.155 "state": "online", 00:15:40.155 "raid_level": "raid0", 00:15:40.155 "superblock": false, 00:15:40.155 "num_base_bdevs": 3, 00:15:40.155 "num_base_bdevs_discovered": 3, 00:15:40.155 "num_base_bdevs_operational": 3, 00:15:40.155 "base_bdevs_list": [ 00:15:40.155 { 00:15:40.155 "name": "BaseBdev1", 00:15:40.155 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:40.155 "is_configured": true, 00:15:40.155 "data_offset": 0, 00:15:40.155 "data_size": 65536 00:15:40.155 }, 00:15:40.155 { 00:15:40.155 "name": "BaseBdev2", 00:15:40.155 "uuid": "5b78d110-38ca-4fb4-bec5-8e131d9b988a", 00:15:40.155 "is_configured": true, 00:15:40.155 "data_offset": 0, 00:15:40.155 "data_size": 65536 00:15:40.155 }, 00:15:40.155 { 00:15:40.155 "name": "BaseBdev3", 00:15:40.155 "uuid": "3d8e2d7e-b901-4e23-b64d-b1d1cb2488da", 00:15:40.155 "is_configured": true, 00:15:40.155 "data_offset": 0, 00:15:40.155 "data_size": 65536 00:15:40.155 } 00:15:40.155 ] 00:15:40.156 }' 00:15:40.156 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.156 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.414 [2024-11-06 09:08:39.420930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.414 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.674 "name": "Existed_Raid", 00:15:40.674 "aliases": [ 00:15:40.674 "3d343dbf-a109-4e9a-bdcc-734b0def2c56" 00:15:40.674 ], 00:15:40.674 "product_name": "Raid Volume", 00:15:40.674 "block_size": 512, 00:15:40.674 "num_blocks": 196608, 00:15:40.674 "uuid": "3d343dbf-a109-4e9a-bdcc-734b0def2c56", 00:15:40.674 "assigned_rate_limits": { 00:15:40.674 "rw_ios_per_sec": 0, 00:15:40.674 "rw_mbytes_per_sec": 0, 00:15:40.674 "r_mbytes_per_sec": 0, 00:15:40.674 "w_mbytes_per_sec": 0 00:15:40.674 }, 00:15:40.674 "claimed": false, 00:15:40.674 "zoned": false, 00:15:40.674 "supported_io_types": { 00:15:40.674 "read": true, 00:15:40.674 "write": true, 00:15:40.674 "unmap": true, 00:15:40.674 "flush": true, 00:15:40.674 "reset": true, 00:15:40.674 "nvme_admin": false, 00:15:40.674 "nvme_io": false, 00:15:40.674 "nvme_io_md": false, 00:15:40.674 "write_zeroes": true, 00:15:40.674 "zcopy": false, 00:15:40.674 "get_zone_info": false, 00:15:40.674 "zone_management": false, 00:15:40.674 
"zone_append": false, 00:15:40.674 "compare": false, 00:15:40.674 "compare_and_write": false, 00:15:40.674 "abort": false, 00:15:40.674 "seek_hole": false, 00:15:40.674 "seek_data": false, 00:15:40.674 "copy": false, 00:15:40.674 "nvme_iov_md": false 00:15:40.674 }, 00:15:40.674 "memory_domains": [ 00:15:40.674 { 00:15:40.674 "dma_device_id": "system", 00:15:40.674 "dma_device_type": 1 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.674 "dma_device_type": 2 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "dma_device_id": "system", 00:15:40.674 "dma_device_type": 1 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.674 "dma_device_type": 2 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "dma_device_id": "system", 00:15:40.674 "dma_device_type": 1 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.674 "dma_device_type": 2 00:15:40.674 } 00:15:40.674 ], 00:15:40.674 "driver_specific": { 00:15:40.674 "raid": { 00:15:40.674 "uuid": "3d343dbf-a109-4e9a-bdcc-734b0def2c56", 00:15:40.674 "strip_size_kb": 64, 00:15:40.674 "state": "online", 00:15:40.674 "raid_level": "raid0", 00:15:40.674 "superblock": false, 00:15:40.674 "num_base_bdevs": 3, 00:15:40.674 "num_base_bdevs_discovered": 3, 00:15:40.674 "num_base_bdevs_operational": 3, 00:15:40.674 "base_bdevs_list": [ 00:15:40.674 { 00:15:40.674 "name": "BaseBdev1", 00:15:40.674 "uuid": "4797ae78-86e2-4e16-8e05-d67d20b46d1c", 00:15:40.674 "is_configured": true, 00:15:40.674 "data_offset": 0, 00:15:40.674 "data_size": 65536 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "name": "BaseBdev2", 00:15:40.674 "uuid": "5b78d110-38ca-4fb4-bec5-8e131d9b988a", 00:15:40.674 "is_configured": true, 00:15:40.674 "data_offset": 0, 00:15:40.674 "data_size": 65536 00:15:40.674 }, 00:15:40.674 { 00:15:40.674 "name": "BaseBdev3", 00:15:40.674 "uuid": "3d8e2d7e-b901-4e23-b64d-b1d1cb2488da", 00:15:40.674 "is_configured": true, 
00:15:40.674 "data_offset": 0, 00:15:40.674 "data_size": 65536 00:15:40.674 } 00:15:40.674 ] 00:15:40.674 } 00:15:40.674 } 00:15:40.674 }' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:40.674 BaseBdev2 00:15:40.674 BaseBdev3' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 [2024-11-06 09:08:39.696306] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.674 [2024-11-06 09:08:39.696340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.674 [2024-11-06 09:08:39.696396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.933 "name": "Existed_Raid", 00:15:40.933 "uuid": "3d343dbf-a109-4e9a-bdcc-734b0def2c56", 00:15:40.933 "strip_size_kb": 64, 00:15:40.933 "state": "offline", 00:15:40.933 "raid_level": "raid0", 00:15:40.933 "superblock": false, 00:15:40.933 "num_base_bdevs": 3, 00:15:40.933 "num_base_bdevs_discovered": 2, 00:15:40.933 "num_base_bdevs_operational": 2, 00:15:40.933 "base_bdevs_list": [ 00:15:40.933 { 00:15:40.933 "name": null, 00:15:40.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.933 "is_configured": false, 00:15:40.933 "data_offset": 0, 00:15:40.933 "data_size": 65536 00:15:40.933 }, 00:15:40.933 { 00:15:40.933 "name": "BaseBdev2", 00:15:40.933 "uuid": "5b78d110-38ca-4fb4-bec5-8e131d9b988a", 00:15:40.933 "is_configured": true, 00:15:40.933 "data_offset": 0, 00:15:40.933 "data_size": 65536 00:15:40.933 }, 00:15:40.933 { 00:15:40.933 "name": "BaseBdev3", 00:15:40.933 "uuid": "3d8e2d7e-b901-4e23-b64d-b1d1cb2488da", 00:15:40.933 "is_configured": true, 00:15:40.933 "data_offset": 0, 00:15:40.933 "data_size": 65536 00:15:40.933 } 00:15:40.933 ] 00:15:40.933 }' 00:15:40.933 09:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.933 09:08:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.191 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 [2024-11-06 09:08:40.256846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.449 09:08:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.449 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 [2024-11-06 09:08:40.391865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.449 [2024-11-06 09:08:40.391942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.708 BaseBdev2 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:41.708 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 [ 00:15:41.709 { 00:15:41.709 "name": "BaseBdev2", 00:15:41.709 "aliases": [ 00:15:41.709 "ae8c606e-04ee-474f-9788-6abd5a693ae7" 00:15:41.709 ], 00:15:41.709 "product_name": "Malloc disk", 00:15:41.709 "block_size": 512, 00:15:41.709 "num_blocks": 65536, 00:15:41.709 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:41.709 "assigned_rate_limits": { 00:15:41.709 "rw_ios_per_sec": 0, 00:15:41.709 "rw_mbytes_per_sec": 0, 00:15:41.709 "r_mbytes_per_sec": 0, 00:15:41.709 "w_mbytes_per_sec": 0 00:15:41.709 }, 00:15:41.709 "claimed": false, 00:15:41.709 "zoned": false, 00:15:41.709 "supported_io_types": { 00:15:41.709 "read": true, 00:15:41.709 "write": true, 00:15:41.709 "unmap": true, 00:15:41.709 "flush": true, 00:15:41.709 "reset": true, 00:15:41.709 "nvme_admin": false, 00:15:41.709 "nvme_io": false, 00:15:41.709 "nvme_io_md": false, 00:15:41.709 "write_zeroes": true, 00:15:41.709 "zcopy": true, 00:15:41.709 "get_zone_info": false, 00:15:41.709 "zone_management": false, 00:15:41.709 "zone_append": false, 00:15:41.709 "compare": false, 00:15:41.709 "compare_and_write": false, 00:15:41.709 "abort": true, 00:15:41.709 "seek_hole": false, 00:15:41.709 "seek_data": false, 00:15:41.709 "copy": true, 00:15:41.709 "nvme_iov_md": false 00:15:41.709 }, 00:15:41.709 "memory_domains": [ 00:15:41.709 { 00:15:41.709 "dma_device_id": "system", 00:15:41.709 "dma_device_type": 1 00:15:41.709 }, 
00:15:41.709 { 00:15:41.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.709 "dma_device_type": 2 00:15:41.709 } 00:15:41.709 ], 00:15:41.709 "driver_specific": {} 00:15:41.709 } 00:15:41.709 ] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 BaseBdev3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 [ 00:15:41.709 { 00:15:41.709 "name": "BaseBdev3", 00:15:41.709 "aliases": [ 00:15:41.709 "2f739532-0f34-44f7-95e1-00848c318be9" 00:15:41.709 ], 00:15:41.709 "product_name": "Malloc disk", 00:15:41.709 "block_size": 512, 00:15:41.709 "num_blocks": 65536, 00:15:41.709 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:41.709 "assigned_rate_limits": { 00:15:41.709 "rw_ios_per_sec": 0, 00:15:41.709 "rw_mbytes_per_sec": 0, 00:15:41.709 "r_mbytes_per_sec": 0, 00:15:41.709 "w_mbytes_per_sec": 0 00:15:41.709 }, 00:15:41.709 "claimed": false, 00:15:41.709 "zoned": false, 00:15:41.709 "supported_io_types": { 00:15:41.709 "read": true, 00:15:41.709 "write": true, 00:15:41.709 "unmap": true, 00:15:41.709 "flush": true, 00:15:41.709 "reset": true, 00:15:41.709 "nvme_admin": false, 00:15:41.709 "nvme_io": false, 00:15:41.709 "nvme_io_md": false, 00:15:41.709 "write_zeroes": true, 00:15:41.709 "zcopy": true, 00:15:41.709 "get_zone_info": false, 00:15:41.709 "zone_management": false, 00:15:41.709 "zone_append": false, 00:15:41.709 "compare": false, 00:15:41.709 "compare_and_write": false, 00:15:41.709 "abort": true, 00:15:41.709 "seek_hole": false, 00:15:41.709 "seek_data": false, 00:15:41.709 "copy": true, 00:15:41.709 "nvme_iov_md": false 00:15:41.709 }, 00:15:41.709 "memory_domains": [ 00:15:41.709 { 00:15:41.709 "dma_device_id": "system", 00:15:41.709 "dma_device_type": 1 00:15:41.709 }, 00:15:41.709 { 
00:15:41.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.709 "dma_device_type": 2 00:15:41.709 } 00:15:41.709 ], 00:15:41.709 "driver_specific": {} 00:15:41.709 } 00:15:41.709 ] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 [2024-11-06 09:08:40.688882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.709 [2024-11-06 09:08:40.688936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.709 [2024-11-06 09:08:40.688963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.709 [2024-11-06 09:08:40.691122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.709 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.709 "name": "Existed_Raid", 00:15:41.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.709 "strip_size_kb": 64, 00:15:41.709 "state": "configuring", 00:15:41.709 "raid_level": "raid0", 00:15:41.709 "superblock": false, 00:15:41.709 "num_base_bdevs": 3, 00:15:41.709 "num_base_bdevs_discovered": 2, 00:15:41.709 "num_base_bdevs_operational": 3, 00:15:41.709 "base_bdevs_list": [ 00:15:41.709 { 00:15:41.709 "name": "BaseBdev1", 00:15:41.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.709 
"is_configured": false, 00:15:41.709 "data_offset": 0, 00:15:41.709 "data_size": 0 00:15:41.709 }, 00:15:41.709 { 00:15:41.709 "name": "BaseBdev2", 00:15:41.709 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:41.709 "is_configured": true, 00:15:41.709 "data_offset": 0, 00:15:41.709 "data_size": 65536 00:15:41.709 }, 00:15:41.709 { 00:15:41.709 "name": "BaseBdev3", 00:15:41.709 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:41.709 "is_configured": true, 00:15:41.709 "data_offset": 0, 00:15:41.709 "data_size": 65536 00:15:41.709 } 00:15:41.709 ] 00:15:41.710 }' 00:15:41.710 09:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.710 09:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.288 [2024-11-06 09:08:41.096372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.288 09:08:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.288 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.289 "name": "Existed_Raid", 00:15:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.289 "strip_size_kb": 64, 00:15:42.289 "state": "configuring", 00:15:42.289 "raid_level": "raid0", 00:15:42.289 "superblock": false, 00:15:42.289 "num_base_bdevs": 3, 00:15:42.289 "num_base_bdevs_discovered": 1, 00:15:42.289 "num_base_bdevs_operational": 3, 00:15:42.289 "base_bdevs_list": [ 00:15:42.289 { 00:15:42.289 "name": "BaseBdev1", 00:15:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.289 "is_configured": false, 00:15:42.289 "data_offset": 0, 00:15:42.289 "data_size": 0 00:15:42.289 }, 00:15:42.289 { 00:15:42.289 "name": null, 00:15:42.289 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:42.289 "is_configured": false, 00:15:42.289 "data_offset": 0, 
00:15:42.289 "data_size": 65536 00:15:42.289 }, 00:15:42.289 { 00:15:42.289 "name": "BaseBdev3", 00:15:42.289 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:42.289 "is_configured": true, 00:15:42.289 "data_offset": 0, 00:15:42.289 "data_size": 65536 00:15:42.289 } 00:15:42.289 ] 00:15:42.289 }' 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.289 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.548 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.806 [2024-11-06 09:08:41.607675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.806 BaseBdev1 00:15:42.806 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.806 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.807 [ 00:15:42.807 { 00:15:42.807 "name": "BaseBdev1", 00:15:42.807 "aliases": [ 00:15:42.807 "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce" 00:15:42.807 ], 00:15:42.807 "product_name": "Malloc disk", 00:15:42.807 "block_size": 512, 00:15:42.807 "num_blocks": 65536, 00:15:42.807 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:42.807 "assigned_rate_limits": { 00:15:42.807 "rw_ios_per_sec": 0, 00:15:42.807 "rw_mbytes_per_sec": 0, 00:15:42.807 "r_mbytes_per_sec": 0, 00:15:42.807 "w_mbytes_per_sec": 0 00:15:42.807 }, 00:15:42.807 "claimed": true, 00:15:42.807 "claim_type": "exclusive_write", 00:15:42.807 "zoned": false, 00:15:42.807 "supported_io_types": { 00:15:42.807 "read": true, 00:15:42.807 "write": true, 00:15:42.807 "unmap": 
true, 00:15:42.807 "flush": true, 00:15:42.807 "reset": true, 00:15:42.807 "nvme_admin": false, 00:15:42.807 "nvme_io": false, 00:15:42.807 "nvme_io_md": false, 00:15:42.807 "write_zeroes": true, 00:15:42.807 "zcopy": true, 00:15:42.807 "get_zone_info": false, 00:15:42.807 "zone_management": false, 00:15:42.807 "zone_append": false, 00:15:42.807 "compare": false, 00:15:42.807 "compare_and_write": false, 00:15:42.807 "abort": true, 00:15:42.807 "seek_hole": false, 00:15:42.807 "seek_data": false, 00:15:42.807 "copy": true, 00:15:42.807 "nvme_iov_md": false 00:15:42.807 }, 00:15:42.807 "memory_domains": [ 00:15:42.807 { 00:15:42.807 "dma_device_id": "system", 00:15:42.807 "dma_device_type": 1 00:15:42.807 }, 00:15:42.807 { 00:15:42.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.807 "dma_device_type": 2 00:15:42.807 } 00:15:42.807 ], 00:15:42.807 "driver_specific": {} 00:15:42.807 } 00:15:42.807 ] 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.807 09:08:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.807 "name": "Existed_Raid", 00:15:42.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.807 "strip_size_kb": 64, 00:15:42.807 "state": "configuring", 00:15:42.807 "raid_level": "raid0", 00:15:42.807 "superblock": false, 00:15:42.807 "num_base_bdevs": 3, 00:15:42.807 "num_base_bdevs_discovered": 2, 00:15:42.807 "num_base_bdevs_operational": 3, 00:15:42.807 "base_bdevs_list": [ 00:15:42.807 { 00:15:42.807 "name": "BaseBdev1", 00:15:42.807 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:42.807 "is_configured": true, 00:15:42.807 "data_offset": 0, 00:15:42.807 "data_size": 65536 00:15:42.807 }, 00:15:42.807 { 00:15:42.807 "name": null, 00:15:42.807 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:42.807 "is_configured": false, 00:15:42.807 "data_offset": 0, 00:15:42.807 "data_size": 65536 00:15:42.807 }, 00:15:42.807 { 00:15:42.807 "name": "BaseBdev3", 00:15:42.807 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:42.807 "is_configured": true, 00:15:42.807 "data_offset": 0, 
00:15:42.807 "data_size": 65536 00:15:42.807 } 00:15:42.807 ] 00:15:42.807 }' 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.807 09:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.067 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.067 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:43.067 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.067 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.067 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.326 [2024-11-06 09:08:42.115066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.326 "name": "Existed_Raid", 00:15:43.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.326 "strip_size_kb": 64, 00:15:43.326 "state": "configuring", 00:15:43.326 "raid_level": "raid0", 00:15:43.326 "superblock": false, 00:15:43.326 "num_base_bdevs": 3, 00:15:43.326 "num_base_bdevs_discovered": 1, 00:15:43.326 "num_base_bdevs_operational": 3, 00:15:43.326 "base_bdevs_list": [ 00:15:43.326 { 00:15:43.326 "name": "BaseBdev1", 00:15:43.326 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:43.326 "is_configured": true, 00:15:43.326 "data_offset": 0, 00:15:43.326 "data_size": 65536 00:15:43.326 }, 00:15:43.326 { 
00:15:43.326 "name": null, 00:15:43.326 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:43.326 "is_configured": false, 00:15:43.326 "data_offset": 0, 00:15:43.326 "data_size": 65536 00:15:43.326 }, 00:15:43.326 { 00:15:43.326 "name": null, 00:15:43.326 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:43.326 "is_configured": false, 00:15:43.326 "data_offset": 0, 00:15:43.326 "data_size": 65536 00:15:43.326 } 00:15:43.326 ] 00:15:43.326 }' 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.326 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.585 [2024-11-06 09:08:42.594437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.585 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.844 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.844 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.844 "name": "Existed_Raid", 00:15:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.844 "strip_size_kb": 64, 00:15:43.844 "state": "configuring", 00:15:43.844 "raid_level": "raid0", 00:15:43.844 
"superblock": false, 00:15:43.844 "num_base_bdevs": 3, 00:15:43.844 "num_base_bdevs_discovered": 2, 00:15:43.844 "num_base_bdevs_operational": 3, 00:15:43.844 "base_bdevs_list": [ 00:15:43.844 { 00:15:43.844 "name": "BaseBdev1", 00:15:43.844 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:43.844 "is_configured": true, 00:15:43.844 "data_offset": 0, 00:15:43.844 "data_size": 65536 00:15:43.844 }, 00:15:43.844 { 00:15:43.844 "name": null, 00:15:43.844 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:43.844 "is_configured": false, 00:15:43.844 "data_offset": 0, 00:15:43.844 "data_size": 65536 00:15:43.844 }, 00:15:43.844 { 00:15:43.844 "name": "BaseBdev3", 00:15:43.844 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:43.844 "is_configured": true, 00:15:43.844 "data_offset": 0, 00:15:43.844 "data_size": 65536 00:15:43.844 } 00:15:43.844 ] 00:15:43.844 }' 00:15:43.844 09:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.844 09:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:44.102 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.102 [2024-11-06 09:08:43.085737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.362 "name": "Existed_Raid", 00:15:44.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.362 "strip_size_kb": 64, 00:15:44.362 "state": "configuring", 00:15:44.362 "raid_level": "raid0", 00:15:44.362 "superblock": false, 00:15:44.362 "num_base_bdevs": 3, 00:15:44.362 "num_base_bdevs_discovered": 1, 00:15:44.362 "num_base_bdevs_operational": 3, 00:15:44.362 "base_bdevs_list": [ 00:15:44.362 { 00:15:44.362 "name": null, 00:15:44.362 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:44.362 "is_configured": false, 00:15:44.362 "data_offset": 0, 00:15:44.362 "data_size": 65536 00:15:44.362 }, 00:15:44.362 { 00:15:44.362 "name": null, 00:15:44.362 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:44.362 "is_configured": false, 00:15:44.362 "data_offset": 0, 00:15:44.362 "data_size": 65536 00:15:44.362 }, 00:15:44.362 { 00:15:44.362 "name": "BaseBdev3", 00:15:44.362 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:44.362 "is_configured": true, 00:15:44.362 "data_offset": 0, 00:15:44.362 "data_size": 65536 00:15:44.362 } 00:15:44.362 ] 00:15:44.362 }' 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.362 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.621 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.621 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.621 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.621 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.621 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.879 [2024-11-06 09:08:43.679411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.879 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.880 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.880 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.880 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.880 "name": "Existed_Raid", 00:15:44.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.880 "strip_size_kb": 64, 00:15:44.880 "state": "configuring", 00:15:44.880 "raid_level": "raid0", 00:15:44.880 "superblock": false, 00:15:44.880 "num_base_bdevs": 3, 00:15:44.880 "num_base_bdevs_discovered": 2, 00:15:44.880 "num_base_bdevs_operational": 3, 00:15:44.880 "base_bdevs_list": [ 00:15:44.880 { 00:15:44.880 "name": null, 00:15:44.880 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:44.880 "is_configured": false, 00:15:44.880 "data_offset": 0, 00:15:44.880 "data_size": 65536 00:15:44.880 }, 00:15:44.880 { 00:15:44.880 "name": "BaseBdev2", 00:15:44.880 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:44.880 "is_configured": true, 00:15:44.880 "data_offset": 0, 00:15:44.880 "data_size": 65536 00:15:44.880 }, 00:15:44.880 { 00:15:44.880 "name": "BaseBdev3", 00:15:44.880 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:44.880 "is_configured": true, 00:15:44.880 "data_offset": 0, 00:15:44.880 "data_size": 65536 00:15:44.880 } 00:15:44.880 ] 00:15:44.880 }' 00:15:44.880 09:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.880 09:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.138 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.138 09:08:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.138 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.138 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.138 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.396 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:45.396 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:45.396 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.396 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.396 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b73cc428-0c9d-4097-bb8e-36ee36a2f2ce 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 [2024-11-06 09:08:44.252628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.397 [2024-11-06 09:08:44.252677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.397 [2024-11-06 09:08:44.252690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:45.397 [2024-11-06 09:08:44.252968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:15:45.397 [2024-11-06 09:08:44.253123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.397 [2024-11-06 09:08:44.253133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:45.397 [2024-11-06 09:08:44.253390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.397 NewBaseBdev 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.397 [ 00:15:45.397 { 00:15:45.397 "name": "NewBaseBdev", 00:15:45.397 "aliases": [ 00:15:45.397 "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce" 00:15:45.397 ], 00:15:45.397 "product_name": "Malloc disk", 00:15:45.397 "block_size": 512, 00:15:45.397 "num_blocks": 65536, 00:15:45.397 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:45.397 "assigned_rate_limits": { 00:15:45.397 "rw_ios_per_sec": 0, 00:15:45.397 "rw_mbytes_per_sec": 0, 00:15:45.397 "r_mbytes_per_sec": 0, 00:15:45.397 "w_mbytes_per_sec": 0 00:15:45.397 }, 00:15:45.397 "claimed": true, 00:15:45.397 "claim_type": "exclusive_write", 00:15:45.397 "zoned": false, 00:15:45.397 "supported_io_types": { 00:15:45.397 "read": true, 00:15:45.397 "write": true, 00:15:45.397 "unmap": true, 00:15:45.397 "flush": true, 00:15:45.397 "reset": true, 00:15:45.397 "nvme_admin": false, 00:15:45.397 "nvme_io": false, 00:15:45.397 "nvme_io_md": false, 00:15:45.397 "write_zeroes": true, 00:15:45.397 "zcopy": true, 00:15:45.397 "get_zone_info": false, 00:15:45.397 "zone_management": false, 00:15:45.397 "zone_append": false, 00:15:45.397 "compare": false, 00:15:45.397 "compare_and_write": false, 00:15:45.397 "abort": true, 00:15:45.397 "seek_hole": false, 00:15:45.397 "seek_data": false, 00:15:45.397 "copy": true, 00:15:45.397 "nvme_iov_md": false 00:15:45.397 }, 00:15:45.397 "memory_domains": [ 00:15:45.397 { 00:15:45.397 "dma_device_id": "system", 00:15:45.397 "dma_device_type": 1 00:15:45.397 }, 00:15:45.397 { 00:15:45.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.397 "dma_device_type": 2 00:15:45.397 } 00:15:45.397 ], 00:15:45.397 "driver_specific": {} 00:15:45.397 } 00:15:45.397 ] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.397 "name": "Existed_Raid", 00:15:45.397 "uuid": "38b10362-dd23-4971-ba0a-c40f39b9a83e", 00:15:45.397 "strip_size_kb": 64, 00:15:45.397 "state": "online", 00:15:45.397 "raid_level": "raid0", 00:15:45.397 "superblock": false, 00:15:45.397 "num_base_bdevs": 3, 00:15:45.397 
"num_base_bdevs_discovered": 3, 00:15:45.397 "num_base_bdevs_operational": 3, 00:15:45.397 "base_bdevs_list": [ 00:15:45.397 { 00:15:45.397 "name": "NewBaseBdev", 00:15:45.397 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:45.397 "is_configured": true, 00:15:45.397 "data_offset": 0, 00:15:45.397 "data_size": 65536 00:15:45.397 }, 00:15:45.397 { 00:15:45.397 "name": "BaseBdev2", 00:15:45.397 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:45.397 "is_configured": true, 00:15:45.397 "data_offset": 0, 00:15:45.397 "data_size": 65536 00:15:45.397 }, 00:15:45.397 { 00:15:45.397 "name": "BaseBdev3", 00:15:45.397 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:45.397 "is_configured": true, 00:15:45.397 "data_offset": 0, 00:15:45.397 "data_size": 65536 00:15:45.397 } 00:15:45.397 ] 00:15:45.397 }' 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.397 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.977 [2024-11-06 09:08:44.736324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.977 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.977 "name": "Existed_Raid", 00:15:45.977 "aliases": [ 00:15:45.977 "38b10362-dd23-4971-ba0a-c40f39b9a83e" 00:15:45.977 ], 00:15:45.977 "product_name": "Raid Volume", 00:15:45.977 "block_size": 512, 00:15:45.977 "num_blocks": 196608, 00:15:45.977 "uuid": "38b10362-dd23-4971-ba0a-c40f39b9a83e", 00:15:45.977 "assigned_rate_limits": { 00:15:45.977 "rw_ios_per_sec": 0, 00:15:45.977 "rw_mbytes_per_sec": 0, 00:15:45.977 "r_mbytes_per_sec": 0, 00:15:45.977 "w_mbytes_per_sec": 0 00:15:45.977 }, 00:15:45.977 "claimed": false, 00:15:45.977 "zoned": false, 00:15:45.977 "supported_io_types": { 00:15:45.977 "read": true, 00:15:45.977 "write": true, 00:15:45.977 "unmap": true, 00:15:45.977 "flush": true, 00:15:45.977 "reset": true, 00:15:45.977 "nvme_admin": false, 00:15:45.977 "nvme_io": false, 00:15:45.977 "nvme_io_md": false, 00:15:45.977 "write_zeroes": true, 00:15:45.977 "zcopy": false, 00:15:45.977 "get_zone_info": false, 00:15:45.977 "zone_management": false, 00:15:45.977 "zone_append": false, 00:15:45.977 "compare": false, 00:15:45.977 "compare_and_write": false, 00:15:45.977 "abort": false, 00:15:45.977 "seek_hole": false, 00:15:45.977 "seek_data": false, 00:15:45.977 "copy": false, 00:15:45.977 "nvme_iov_md": false 00:15:45.977 }, 00:15:45.977 "memory_domains": [ 00:15:45.977 { 00:15:45.977 "dma_device_id": "system", 00:15:45.977 "dma_device_type": 1 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.977 "dma_device_type": 2 00:15:45.977 }, 
00:15:45.977 { 00:15:45.977 "dma_device_id": "system", 00:15:45.977 "dma_device_type": 1 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.977 "dma_device_type": 2 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "dma_device_id": "system", 00:15:45.977 "dma_device_type": 1 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.977 "dma_device_type": 2 00:15:45.977 } 00:15:45.977 ], 00:15:45.977 "driver_specific": { 00:15:45.977 "raid": { 00:15:45.977 "uuid": "38b10362-dd23-4971-ba0a-c40f39b9a83e", 00:15:45.977 "strip_size_kb": 64, 00:15:45.977 "state": "online", 00:15:45.977 "raid_level": "raid0", 00:15:45.977 "superblock": false, 00:15:45.977 "num_base_bdevs": 3, 00:15:45.977 "num_base_bdevs_discovered": 3, 00:15:45.977 "num_base_bdevs_operational": 3, 00:15:45.977 "base_bdevs_list": [ 00:15:45.977 { 00:15:45.977 "name": "NewBaseBdev", 00:15:45.977 "uuid": "b73cc428-0c9d-4097-bb8e-36ee36a2f2ce", 00:15:45.977 "is_configured": true, 00:15:45.977 "data_offset": 0, 00:15:45.977 "data_size": 65536 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "name": "BaseBdev2", 00:15:45.977 "uuid": "ae8c606e-04ee-474f-9788-6abd5a693ae7", 00:15:45.977 "is_configured": true, 00:15:45.977 "data_offset": 0, 00:15:45.977 "data_size": 65536 00:15:45.977 }, 00:15:45.977 { 00:15:45.977 "name": "BaseBdev3", 00:15:45.977 "uuid": "2f739532-0f34-44f7-95e1-00848c318be9", 00:15:45.977 "is_configured": true, 00:15:45.977 "data_offset": 0, 00:15:45.977 "data_size": 65536 00:15:45.977 } 00:15:45.977 ] 00:15:45.978 } 00:15:45.978 } 00:15:45.978 }' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:45.978 BaseBdev2 00:15:45.978 BaseBdev3' 00:15:45.978 09:08:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.978 [2024-11-06 09:08:44.979691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.978 [2024-11-06 09:08:44.979848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.978 [2024-11-06 09:08:44.979961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.978 [2024-11-06 09:08:44.980019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.978 [2024-11-06 09:08:44.980036] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63599 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63599 ']' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63599 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:45.978 09:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63599 00:15:46.237 09:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.237 09:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.237 09:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63599' 00:15:46.237 killing process with pid 63599 00:15:46.237 09:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63599 00:15:46.237 [2024-11-06 09:08:45.035868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.237 09:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63599 00:15:46.496 [2024-11-06 09:08:45.344313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:47.875 00:15:47.875 real 0m10.477s 00:15:47.875 user 0m16.630s 00:15:47.875 sys 0m2.066s 00:15:47.875 ************************************ 00:15:47.875 END TEST 
raid_state_function_test 00:15:47.875 ************************************ 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.875 09:08:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:47.875 09:08:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:47.875 09:08:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:47.875 09:08:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.875 ************************************ 00:15:47.875 START TEST raid_state_function_test_sb 00:15:47.875 ************************************ 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.875 09:08:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:47.875 Process raid pid: 64220 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64220 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64220' 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64220 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64220 ']' 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.875 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:47.876 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.876 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:47.876 09:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.876 [2024-11-06 09:08:46.655950] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:15:47.876 [2024-11-06 09:08:46.656078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.876 [2024-11-06 09:08:46.838171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.133 [2024-11-06 09:08:46.963128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.391 [2024-11-06 09:08:47.183192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.391 [2024-11-06 09:08:47.183245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.650 [2024-11-06 09:08:47.506506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.650 [2024-11-06 09:08:47.506745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.650 [2024-11-06 09:08:47.506772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.650 [2024-11-06 09:08:47.506788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.650 [2024-11-06 09:08:47.506808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:15:48.650 [2024-11-06 09:08:47.506821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.650 "name": "Existed_Raid", 00:15:48.650 "uuid": "87f80a20-98bd-4552-9cfd-82d47d6e28dc", 00:15:48.650 "strip_size_kb": 64, 00:15:48.650 "state": "configuring", 00:15:48.650 "raid_level": "raid0", 00:15:48.650 "superblock": true, 00:15:48.650 "num_base_bdevs": 3, 00:15:48.650 "num_base_bdevs_discovered": 0, 00:15:48.650 "num_base_bdevs_operational": 3, 00:15:48.650 "base_bdevs_list": [ 00:15:48.650 { 00:15:48.650 "name": "BaseBdev1", 00:15:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.650 "is_configured": false, 00:15:48.650 "data_offset": 0, 00:15:48.650 "data_size": 0 00:15:48.650 }, 00:15:48.650 { 00:15:48.650 "name": "BaseBdev2", 00:15:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.650 "is_configured": false, 00:15:48.650 "data_offset": 0, 00:15:48.650 "data_size": 0 00:15:48.650 }, 00:15:48.650 { 00:15:48.650 "name": "BaseBdev3", 00:15:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.650 "is_configured": false, 00:15:48.650 "data_offset": 0, 00:15:48.650 "data_size": 0 00:15:48.650 } 00:15:48.650 ] 00:15:48.650 }' 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.650 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 [2024-11-06 09:08:47.957826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.219 [2024-11-06 09:08:47.958046] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 [2024-11-06 09:08:47.969794] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.219 [2024-11-06 09:08:47.969847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.219 [2024-11-06 09:08:47.969858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.219 [2024-11-06 09:08:47.969871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.219 [2024-11-06 09:08:47.969879] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.219 [2024-11-06 09:08:47.969892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 [2024-11-06 09:08:48.019508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.219 BaseBdev1 
00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 [ 00:15:49.219 { 00:15:49.219 "name": "BaseBdev1", 00:15:49.219 "aliases": [ 00:15:49.219 "3762c689-4144-40e7-97e0-bebded06e8d8" 00:15:49.219 ], 00:15:49.219 "product_name": "Malloc disk", 00:15:49.219 "block_size": 512, 00:15:49.219 "num_blocks": 65536, 00:15:49.219 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:49.219 "assigned_rate_limits": { 00:15:49.219 
"rw_ios_per_sec": 0, 00:15:49.219 "rw_mbytes_per_sec": 0, 00:15:49.219 "r_mbytes_per_sec": 0, 00:15:49.219 "w_mbytes_per_sec": 0 00:15:49.219 }, 00:15:49.219 "claimed": true, 00:15:49.219 "claim_type": "exclusive_write", 00:15:49.219 "zoned": false, 00:15:49.219 "supported_io_types": { 00:15:49.219 "read": true, 00:15:49.219 "write": true, 00:15:49.219 "unmap": true, 00:15:49.219 "flush": true, 00:15:49.219 "reset": true, 00:15:49.219 "nvme_admin": false, 00:15:49.219 "nvme_io": false, 00:15:49.219 "nvme_io_md": false, 00:15:49.219 "write_zeroes": true, 00:15:49.219 "zcopy": true, 00:15:49.219 "get_zone_info": false, 00:15:49.219 "zone_management": false, 00:15:49.219 "zone_append": false, 00:15:49.219 "compare": false, 00:15:49.219 "compare_and_write": false, 00:15:49.219 "abort": true, 00:15:49.219 "seek_hole": false, 00:15:49.219 "seek_data": false, 00:15:49.219 "copy": true, 00:15:49.219 "nvme_iov_md": false 00:15:49.219 }, 00:15:49.219 "memory_domains": [ 00:15:49.219 { 00:15:49.219 "dma_device_id": "system", 00:15:49.219 "dma_device_type": 1 00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.219 "dma_device_type": 2 00:15:49.219 } 00:15:49.219 ], 00:15:49.219 "driver_specific": {} 00:15:49.219 } 00:15:49.219 ] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.219 "name": "Existed_Raid", 00:15:49.219 "uuid": "88aef3cc-7c2f-4ea3-aadc-997eba1fa7ef", 00:15:49.219 "strip_size_kb": 64, 00:15:49.219 "state": "configuring", 00:15:49.219 "raid_level": "raid0", 00:15:49.219 "superblock": true, 00:15:49.219 "num_base_bdevs": 3, 00:15:49.219 "num_base_bdevs_discovered": 1, 00:15:49.219 "num_base_bdevs_operational": 3, 00:15:49.219 "base_bdevs_list": [ 00:15:49.219 { 00:15:49.219 "name": "BaseBdev1", 00:15:49.219 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:49.219 "is_configured": true, 00:15:49.219 "data_offset": 2048, 00:15:49.219 "data_size": 63488 
00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "name": "BaseBdev2", 00:15:49.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.219 "is_configured": false, 00:15:49.219 "data_offset": 0, 00:15:49.219 "data_size": 0 00:15:49.219 }, 00:15:49.219 { 00:15:49.219 "name": "BaseBdev3", 00:15:49.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.219 "is_configured": false, 00:15:49.219 "data_offset": 0, 00:15:49.220 "data_size": 0 00:15:49.220 } 00:15:49.220 ] 00:15:49.220 }' 00:15:49.220 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.220 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.786 [2024-11-06 09:08:48.542885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.786 [2024-11-06 09:08:48.543096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.786 [2024-11-06 09:08:48.550945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.786 [2024-11-06 
09:08:48.553269] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.786 [2024-11-06 09:08:48.553335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.786 [2024-11-06 09:08:48.553347] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.786 [2024-11-06 09:08:48.553360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.786 "name": "Existed_Raid", 00:15:49.786 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:49.786 "strip_size_kb": 64, 00:15:49.786 "state": "configuring", 00:15:49.786 "raid_level": "raid0", 00:15:49.786 "superblock": true, 00:15:49.786 "num_base_bdevs": 3, 00:15:49.786 "num_base_bdevs_discovered": 1, 00:15:49.786 "num_base_bdevs_operational": 3, 00:15:49.786 "base_bdevs_list": [ 00:15:49.786 { 00:15:49.786 "name": "BaseBdev1", 00:15:49.786 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:49.786 "is_configured": true, 00:15:49.786 "data_offset": 2048, 00:15:49.786 "data_size": 63488 00:15:49.786 }, 00:15:49.786 { 00:15:49.786 "name": "BaseBdev2", 00:15:49.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.786 "is_configured": false, 00:15:49.786 "data_offset": 0, 00:15:49.786 "data_size": 0 00:15:49.786 }, 00:15:49.786 { 00:15:49.786 "name": "BaseBdev3", 00:15:49.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.786 "is_configured": false, 00:15:49.786 "data_offset": 0, 00:15:49.786 "data_size": 0 00:15:49.786 } 00:15:49.786 ] 00:15:49.786 }' 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.786 09:08:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.046 09:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.046 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.046 09:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.046 [2024-11-06 09:08:49.039585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.046 BaseBdev2 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.046 [ 00:15:50.046 { 00:15:50.046 "name": "BaseBdev2", 00:15:50.046 "aliases": [ 00:15:50.046 "b72f98da-46a9-4dbf-b97d-9deb941900fb" 00:15:50.046 ], 00:15:50.046 "product_name": "Malloc disk", 00:15:50.046 "block_size": 512, 00:15:50.046 "num_blocks": 65536, 00:15:50.046 "uuid": "b72f98da-46a9-4dbf-b97d-9deb941900fb", 00:15:50.046 "assigned_rate_limits": { 00:15:50.046 "rw_ios_per_sec": 0, 00:15:50.046 "rw_mbytes_per_sec": 0, 00:15:50.046 "r_mbytes_per_sec": 0, 00:15:50.046 "w_mbytes_per_sec": 0 00:15:50.046 }, 00:15:50.046 "claimed": true, 00:15:50.046 "claim_type": "exclusive_write", 00:15:50.046 "zoned": false, 00:15:50.046 "supported_io_types": { 00:15:50.046 "read": true, 00:15:50.046 "write": true, 00:15:50.046 "unmap": true, 00:15:50.046 "flush": true, 00:15:50.046 "reset": true, 00:15:50.046 "nvme_admin": false, 00:15:50.046 "nvme_io": false, 00:15:50.046 "nvme_io_md": false, 00:15:50.046 "write_zeroes": true, 00:15:50.046 "zcopy": true, 00:15:50.046 "get_zone_info": false, 00:15:50.046 "zone_management": false, 00:15:50.046 "zone_append": false, 00:15:50.046 "compare": false, 00:15:50.046 "compare_and_write": false, 00:15:50.046 "abort": true, 00:15:50.046 "seek_hole": false, 00:15:50.046 "seek_data": false, 00:15:50.046 "copy": true, 00:15:50.046 "nvme_iov_md": false 00:15:50.046 }, 00:15:50.046 "memory_domains": [ 00:15:50.046 { 00:15:50.046 "dma_device_id": "system", 00:15:50.046 "dma_device_type": 1 00:15:50.046 }, 00:15:50.046 { 00:15:50.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.046 "dma_device_type": 2 00:15:50.046 } 00:15:50.046 ], 00:15:50.046 "driver_specific": {} 00:15:50.046 } 00:15:50.046 ] 00:15:50.046 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.305 "name": "Existed_Raid", 00:15:50.305 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:50.305 "strip_size_kb": 64, 00:15:50.305 "state": "configuring", 00:15:50.305 "raid_level": "raid0", 00:15:50.305 "superblock": true, 00:15:50.305 "num_base_bdevs": 3, 00:15:50.305 "num_base_bdevs_discovered": 2, 00:15:50.305 "num_base_bdevs_operational": 3, 00:15:50.305 "base_bdevs_list": [ 00:15:50.305 { 00:15:50.305 "name": "BaseBdev1", 00:15:50.305 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:50.305 "is_configured": true, 00:15:50.305 "data_offset": 2048, 00:15:50.305 "data_size": 63488 00:15:50.305 }, 00:15:50.305 { 00:15:50.305 "name": "BaseBdev2", 00:15:50.305 "uuid": "b72f98da-46a9-4dbf-b97d-9deb941900fb", 00:15:50.305 "is_configured": true, 00:15:50.305 "data_offset": 2048, 00:15:50.305 "data_size": 63488 00:15:50.305 }, 00:15:50.305 { 00:15:50.305 "name": "BaseBdev3", 00:15:50.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.305 "is_configured": false, 00:15:50.305 "data_offset": 0, 00:15:50.305 "data_size": 0 00:15:50.305 } 00:15:50.305 ] 00:15:50.305 }' 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.305 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.563 [2024-11-06 09:08:49.593969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.563 [2024-11-06 09:08:49.594406] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:50.563 [2024-11-06 09:08:49.594447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:50.563 BaseBdev3 00:15:50.563 [2024-11-06 09:08:49.594879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:50.563 [2024-11-06 09:08:49.595148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:50.563 [2024-11-06 09:08:49.595172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:50.563 [2024-11-06 09:08:49.595434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.563 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.825 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:50.825 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:50.825 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.825 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.825 [ 00:15:50.825 { 00:15:50.825 "name": "BaseBdev3", 00:15:50.825 "aliases": [ 00:15:50.825 "710e23c4-e1d7-480d-9ddc-d0b57ef8b57d" 00:15:50.825 ], 00:15:50.825 "product_name": "Malloc disk", 00:15:50.825 "block_size": 512, 00:15:50.825 "num_blocks": 65536, 00:15:50.825 "uuid": "710e23c4-e1d7-480d-9ddc-d0b57ef8b57d", 00:15:50.825 "assigned_rate_limits": { 00:15:50.825 "rw_ios_per_sec": 0, 00:15:50.825 "rw_mbytes_per_sec": 0, 00:15:50.825 "r_mbytes_per_sec": 0, 00:15:50.825 "w_mbytes_per_sec": 0 00:15:50.825 }, 00:15:50.825 "claimed": true, 00:15:50.825 "claim_type": "exclusive_write", 00:15:50.825 "zoned": false, 00:15:50.825 "supported_io_types": { 00:15:50.825 "read": true, 00:15:50.825 "write": true, 00:15:50.825 "unmap": true, 00:15:50.825 "flush": true, 00:15:50.825 "reset": true, 00:15:50.825 "nvme_admin": false, 00:15:50.825 "nvme_io": false, 00:15:50.825 "nvme_io_md": false, 00:15:50.825 "write_zeroes": true, 00:15:50.826 "zcopy": true, 00:15:50.826 "get_zone_info": false, 00:15:50.826 "zone_management": false, 00:15:50.826 "zone_append": false, 00:15:50.826 "compare": false, 00:15:50.826 "compare_and_write": false, 00:15:50.826 "abort": true, 00:15:50.826 "seek_hole": false, 00:15:50.826 "seek_data": false, 00:15:50.826 "copy": true, 00:15:50.826 "nvme_iov_md": false 00:15:50.826 }, 00:15:50.826 "memory_domains": [ 00:15:50.826 { 00:15:50.826 "dma_device_id": "system", 00:15:50.826 "dma_device_type": 1 00:15:50.826 }, 00:15:50.826 { 00:15:50.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.826 "dma_device_type": 2 00:15:50.826 } 00:15:50.826 ], 00:15:50.826 "driver_specific": 
{} 00:15:50.826 } 00:15:50.826 ] 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.826 
09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.826 "name": "Existed_Raid", 00:15:50.826 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:50.826 "strip_size_kb": 64, 00:15:50.826 "state": "online", 00:15:50.826 "raid_level": "raid0", 00:15:50.826 "superblock": true, 00:15:50.826 "num_base_bdevs": 3, 00:15:50.826 "num_base_bdevs_discovered": 3, 00:15:50.826 "num_base_bdevs_operational": 3, 00:15:50.826 "base_bdevs_list": [ 00:15:50.826 { 00:15:50.826 "name": "BaseBdev1", 00:15:50.826 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:50.826 "is_configured": true, 00:15:50.826 "data_offset": 2048, 00:15:50.826 "data_size": 63488 00:15:50.826 }, 00:15:50.826 { 00:15:50.826 "name": "BaseBdev2", 00:15:50.826 "uuid": "b72f98da-46a9-4dbf-b97d-9deb941900fb", 00:15:50.826 "is_configured": true, 00:15:50.826 "data_offset": 2048, 00:15:50.826 "data_size": 63488 00:15:50.826 }, 00:15:50.826 { 00:15:50.826 "name": "BaseBdev3", 00:15:50.826 "uuid": "710e23c4-e1d7-480d-9ddc-d0b57ef8b57d", 00:15:50.826 "is_configured": true, 00:15:50.826 "data_offset": 2048, 00:15:50.826 "data_size": 63488 00:15:50.826 } 00:15:50.826 ] 00:15:50.826 }' 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.826 09:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.086 [2024-11-06 09:08:50.062021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.086 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.086 "name": "Existed_Raid", 00:15:51.086 "aliases": [ 00:15:51.086 "1c25ec73-220e-47e7-abe6-83af32be3db8" 00:15:51.086 ], 00:15:51.086 "product_name": "Raid Volume", 00:15:51.086 "block_size": 512, 00:15:51.086 "num_blocks": 190464, 00:15:51.086 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:51.086 "assigned_rate_limits": { 00:15:51.086 "rw_ios_per_sec": 0, 00:15:51.086 "rw_mbytes_per_sec": 0, 00:15:51.086 "r_mbytes_per_sec": 0, 00:15:51.086 "w_mbytes_per_sec": 0 00:15:51.086 }, 00:15:51.086 "claimed": false, 00:15:51.086 "zoned": false, 00:15:51.086 "supported_io_types": { 00:15:51.086 "read": true, 00:15:51.086 "write": true, 00:15:51.086 "unmap": true, 00:15:51.086 "flush": true, 00:15:51.086 "reset": true, 00:15:51.086 "nvme_admin": false, 00:15:51.086 "nvme_io": false, 00:15:51.086 "nvme_io_md": false, 00:15:51.086 
"write_zeroes": true, 00:15:51.086 "zcopy": false, 00:15:51.086 "get_zone_info": false, 00:15:51.086 "zone_management": false, 00:15:51.086 "zone_append": false, 00:15:51.086 "compare": false, 00:15:51.086 "compare_and_write": false, 00:15:51.086 "abort": false, 00:15:51.086 "seek_hole": false, 00:15:51.086 "seek_data": false, 00:15:51.086 "copy": false, 00:15:51.086 "nvme_iov_md": false 00:15:51.086 }, 00:15:51.086 "memory_domains": [ 00:15:51.086 { 00:15:51.086 "dma_device_id": "system", 00:15:51.086 "dma_device_type": 1 00:15:51.086 }, 00:15:51.086 { 00:15:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.086 "dma_device_type": 2 00:15:51.086 }, 00:15:51.086 { 00:15:51.086 "dma_device_id": "system", 00:15:51.086 "dma_device_type": 1 00:15:51.086 }, 00:15:51.086 { 00:15:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.086 "dma_device_type": 2 00:15:51.086 }, 00:15:51.086 { 00:15:51.086 "dma_device_id": "system", 00:15:51.086 "dma_device_type": 1 00:15:51.086 }, 00:15:51.086 { 00:15:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.086 "dma_device_type": 2 00:15:51.086 } 00:15:51.086 ], 00:15:51.086 "driver_specific": { 00:15:51.086 "raid": { 00:15:51.086 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:51.086 "strip_size_kb": 64, 00:15:51.086 "state": "online", 00:15:51.086 "raid_level": "raid0", 00:15:51.086 "superblock": true, 00:15:51.086 "num_base_bdevs": 3, 00:15:51.086 "num_base_bdevs_discovered": 3, 00:15:51.086 "num_base_bdevs_operational": 3, 00:15:51.086 "base_bdevs_list": [ 00:15:51.086 { 00:15:51.086 "name": "BaseBdev1", 00:15:51.086 "uuid": "3762c689-4144-40e7-97e0-bebded06e8d8", 00:15:51.086 "is_configured": true, 00:15:51.086 "data_offset": 2048, 00:15:51.087 "data_size": 63488 00:15:51.087 }, 00:15:51.087 { 00:15:51.087 "name": "BaseBdev2", 00:15:51.087 "uuid": "b72f98da-46a9-4dbf-b97d-9deb941900fb", 00:15:51.087 "is_configured": true, 00:15:51.087 "data_offset": 2048, 00:15:51.087 "data_size": 63488 00:15:51.087 }, 
00:15:51.087 { 00:15:51.087 "name": "BaseBdev3", 00:15:51.087 "uuid": "710e23c4-e1d7-480d-9ddc-d0b57ef8b57d", 00:15:51.087 "is_configured": true, 00:15:51.087 "data_offset": 2048, 00:15:51.087 "data_size": 63488 00:15:51.087 } 00:15:51.087 ] 00:15:51.087 } 00:15:51.087 } 00:15:51.087 }' 00:15:51.087 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:51.345 BaseBdev2 00:15:51.345 BaseBdev3' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.345 
09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.345 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.346 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 [2024-11-06 09:08:50.329781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.346 [2024-11-06 09:08:50.329813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.346 [2024-11-06 09:08:50.329871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.604 "name": "Existed_Raid", 00:15:51.604 "uuid": "1c25ec73-220e-47e7-abe6-83af32be3db8", 00:15:51.604 "strip_size_kb": 64, 00:15:51.604 "state": "offline", 00:15:51.604 "raid_level": "raid0", 00:15:51.604 "superblock": true, 00:15:51.604 "num_base_bdevs": 3, 00:15:51.604 "num_base_bdevs_discovered": 2, 00:15:51.604 "num_base_bdevs_operational": 2, 00:15:51.604 "base_bdevs_list": [ 00:15:51.604 { 00:15:51.604 "name": null, 00:15:51.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.604 "is_configured": false, 00:15:51.604 "data_offset": 0, 00:15:51.604 "data_size": 63488 00:15:51.604 }, 00:15:51.604 { 00:15:51.604 "name": "BaseBdev2", 00:15:51.604 "uuid": "b72f98da-46a9-4dbf-b97d-9deb941900fb", 00:15:51.604 "is_configured": true, 00:15:51.604 "data_offset": 2048, 00:15:51.604 "data_size": 63488 00:15:51.604 }, 00:15:51.604 { 00:15:51.604 "name": "BaseBdev3", 00:15:51.604 "uuid": "710e23c4-e1d7-480d-9ddc-d0b57ef8b57d", 
00:15:51.604 "is_configured": true, 00:15:51.604 "data_offset": 2048, 00:15:51.604 "data_size": 63488 00:15:51.604 } 00:15:51.604 ] 00:15:51.604 }' 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.604 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.171 09:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.171 [2024-11-06 09:08:50.988220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.171 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.171 [2024-11-06 09:08:51.144481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:52.171 [2024-11-06 09:08:51.144554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 BaseBdev2 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:52.429 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:52.429 09:08:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 [ 00:15:52.430 { 00:15:52.430 "name": "BaseBdev2", 00:15:52.430 "aliases": [ 00:15:52.430 "c05c3e7d-fb11-443b-a31b-d929b4140bab" 00:15:52.430 ], 00:15:52.430 "product_name": "Malloc disk", 00:15:52.430 "block_size": 512, 00:15:52.430 "num_blocks": 65536, 00:15:52.430 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:52.430 "assigned_rate_limits": { 00:15:52.430 "rw_ios_per_sec": 0, 00:15:52.430 "rw_mbytes_per_sec": 0, 00:15:52.430 "r_mbytes_per_sec": 0, 00:15:52.430 "w_mbytes_per_sec": 0 00:15:52.430 }, 00:15:52.430 "claimed": false, 00:15:52.430 "zoned": false, 00:15:52.430 "supported_io_types": { 00:15:52.430 "read": true, 00:15:52.430 "write": true, 00:15:52.430 "unmap": true, 00:15:52.430 "flush": true, 00:15:52.430 "reset": true, 00:15:52.430 "nvme_admin": false, 00:15:52.430 "nvme_io": false, 00:15:52.430 "nvme_io_md": false, 00:15:52.430 "write_zeroes": true, 00:15:52.430 "zcopy": true, 00:15:52.430 "get_zone_info": false, 00:15:52.430 
"zone_management": false, 00:15:52.430 "zone_append": false, 00:15:52.430 "compare": false, 00:15:52.430 "compare_and_write": false, 00:15:52.430 "abort": true, 00:15:52.430 "seek_hole": false, 00:15:52.430 "seek_data": false, 00:15:52.430 "copy": true, 00:15:52.430 "nvme_iov_md": false 00:15:52.430 }, 00:15:52.430 "memory_domains": [ 00:15:52.430 { 00:15:52.430 "dma_device_id": "system", 00:15:52.430 "dma_device_type": 1 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.430 "dma_device_type": 2 00:15:52.430 } 00:15:52.430 ], 00:15:52.430 "driver_specific": {} 00:15:52.430 } 00:15:52.430 ] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 BaseBdev3 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.430 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 [ 00:15:52.430 { 00:15:52.430 "name": "BaseBdev3", 00:15:52.430 "aliases": [ 00:15:52.430 "8284c55c-d52b-4854-a513-922162865928" 00:15:52.430 ], 00:15:52.430 "product_name": "Malloc disk", 00:15:52.430 "block_size": 512, 00:15:52.430 "num_blocks": 65536, 00:15:52.430 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:52.430 "assigned_rate_limits": { 00:15:52.430 "rw_ios_per_sec": 0, 00:15:52.430 "rw_mbytes_per_sec": 0, 00:15:52.430 "r_mbytes_per_sec": 0, 00:15:52.430 "w_mbytes_per_sec": 0 00:15:52.430 }, 00:15:52.430 "claimed": false, 00:15:52.430 "zoned": false, 00:15:52.430 "supported_io_types": { 00:15:52.430 "read": true, 00:15:52.688 "write": true, 00:15:52.688 "unmap": true, 00:15:52.688 "flush": true, 00:15:52.688 "reset": true, 00:15:52.688 "nvme_admin": false, 00:15:52.688 "nvme_io": false, 00:15:52.688 "nvme_io_md": false, 00:15:52.688 "write_zeroes": true, 00:15:52.688 
"zcopy": true, 00:15:52.688 "get_zone_info": false, 00:15:52.688 "zone_management": false, 00:15:52.688 "zone_append": false, 00:15:52.688 "compare": false, 00:15:52.688 "compare_and_write": false, 00:15:52.688 "abort": true, 00:15:52.688 "seek_hole": false, 00:15:52.688 "seek_data": false, 00:15:52.688 "copy": true, 00:15:52.688 "nvme_iov_md": false 00:15:52.688 }, 00:15:52.688 "memory_domains": [ 00:15:52.688 { 00:15:52.688 "dma_device_id": "system", 00:15:52.688 "dma_device_type": 1 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.688 "dma_device_type": 2 00:15:52.688 } 00:15:52.688 ], 00:15:52.688 "driver_specific": {} 00:15:52.688 } 00:15:52.688 ] 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.689 [2024-11-06 09:08:51.488580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.689 [2024-11-06 09:08:51.488773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.689 [2024-11-06 09:08:51.488893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.689 [2024-11-06 09:08:51.491553] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.689 09:08:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.689 "name": "Existed_Raid", 00:15:52.689 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:52.689 "strip_size_kb": 64, 00:15:52.689 "state": "configuring", 00:15:52.689 "raid_level": "raid0", 00:15:52.689 "superblock": true, 00:15:52.689 "num_base_bdevs": 3, 00:15:52.689 "num_base_bdevs_discovered": 2, 00:15:52.689 "num_base_bdevs_operational": 3, 00:15:52.689 "base_bdevs_list": [ 00:15:52.689 { 00:15:52.689 "name": "BaseBdev1", 00:15:52.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.689 "is_configured": false, 00:15:52.689 "data_offset": 0, 00:15:52.689 "data_size": 0 00:15:52.689 }, 00:15:52.689 { 00:15:52.689 "name": "BaseBdev2", 00:15:52.689 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 2048, 00:15:52.689 "data_size": 63488 00:15:52.689 }, 00:15:52.689 { 00:15:52.689 "name": "BaseBdev3", 00:15:52.689 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 2048, 00:15:52.689 "data_size": 63488 00:15:52.689 } 00:15:52.689 ] 00:15:52.689 }' 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.689 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.947 [2024-11-06 09:08:51.912033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.947 09:08:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.947 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.947 "name": "Existed_Raid", 00:15:52.947 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:52.947 "strip_size_kb": 64, 
00:15:52.947 "state": "configuring", 00:15:52.947 "raid_level": "raid0", 00:15:52.947 "superblock": true, 00:15:52.947 "num_base_bdevs": 3, 00:15:52.947 "num_base_bdevs_discovered": 1, 00:15:52.947 "num_base_bdevs_operational": 3, 00:15:52.947 "base_bdevs_list": [ 00:15:52.947 { 00:15:52.947 "name": "BaseBdev1", 00:15:52.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.947 "is_configured": false, 00:15:52.947 "data_offset": 0, 00:15:52.947 "data_size": 0 00:15:52.947 }, 00:15:52.947 { 00:15:52.947 "name": null, 00:15:52.947 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:52.947 "is_configured": false, 00:15:52.947 "data_offset": 0, 00:15:52.947 "data_size": 63488 00:15:52.947 }, 00:15:52.947 { 00:15:52.947 "name": "BaseBdev3", 00:15:52.947 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:52.948 "is_configured": true, 00:15:52.948 "data_offset": 2048, 00:15:52.948 "data_size": 63488 00:15:52.948 } 00:15:52.948 ] 00:15:52.948 }' 00:15:52.948 09:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.948 09:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 [2024-11-06 09:08:52.406012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.514 BaseBdev1 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.514 
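The `waitforbdev BaseBdev1` sequence above (common/autotest_common.sh@901-909) defaults `bdev_timeout` to 2000 ms, calls `bdev_wait_for_examine`, then queries the bdev with that timeout. In the log the wait is delegated to the RPC itself via `bdev_get_bdevs -b BaseBdev1 -t 2000`; a client-side retry loop is one way to sketch the same idea without a running SPDK target. Here `check_bdev` is a hypothetical stand-in for the real RPC lookup:

```shell
#!/usr/bin/env bash
# Client-side sketch of the waitforbdev pattern: poll a lookup until it
# succeeds or the timeout budget is spent. check_bdev is a toy stand-in for
# `rpc_cmd bdev_get_bdevs -b "$name"` (no SPDK target assumed).
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds, mirroring the log's default
    local i
    for ((i = 0; i < bdev_timeout / 100; i++)); do
        if check_bdev "$bdev_name"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Toy check: succeeds on the third call, as if bdev registration were in flight.
attempts=0
check_bdev() { (( ++attempts >= 3 )); }

waitforbdev BaseBdev1 && echo "BaseBdev1 ready after $attempts checks"
# → prints "BaseBdev1 ready after 3 checks"
```

The real helper pushes the timeout into the RPC (`-t 2000`) so the target blocks until the bdev registers, which avoids this polling interval entirely; the loop form is only illustrative.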
[ 00:15:53.514 { 00:15:53.514 "name": "BaseBdev1", 00:15:53.514 "aliases": [ 00:15:53.514 "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4" 00:15:53.514 ], 00:15:53.514 "product_name": "Malloc disk", 00:15:53.514 "block_size": 512, 00:15:53.514 "num_blocks": 65536, 00:15:53.514 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:53.514 "assigned_rate_limits": { 00:15:53.514 "rw_ios_per_sec": 0, 00:15:53.514 "rw_mbytes_per_sec": 0, 00:15:53.514 "r_mbytes_per_sec": 0, 00:15:53.514 "w_mbytes_per_sec": 0 00:15:53.514 }, 00:15:53.514 "claimed": true, 00:15:53.514 "claim_type": "exclusive_write", 00:15:53.514 "zoned": false, 00:15:53.514 "supported_io_types": { 00:15:53.514 "read": true, 00:15:53.514 "write": true, 00:15:53.514 "unmap": true, 00:15:53.514 "flush": true, 00:15:53.514 "reset": true, 00:15:53.514 "nvme_admin": false, 00:15:53.514 "nvme_io": false, 00:15:53.514 "nvme_io_md": false, 00:15:53.514 "write_zeroes": true, 00:15:53.514 "zcopy": true, 00:15:53.514 "get_zone_info": false, 00:15:53.514 "zone_management": false, 00:15:53.514 "zone_append": false, 00:15:53.514 "compare": false, 00:15:53.514 "compare_and_write": false, 00:15:53.514 "abort": true, 00:15:53.514 "seek_hole": false, 00:15:53.514 "seek_data": false, 00:15:53.514 "copy": true, 00:15:53.514 "nvme_iov_md": false 00:15:53.514 }, 00:15:53.514 "memory_domains": [ 00:15:53.514 { 00:15:53.514 "dma_device_id": "system", 00:15:53.514 "dma_device_type": 1 00:15:53.514 }, 00:15:53.514 { 00:15:53.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.514 "dma_device_type": 2 00:15:53.514 } 00:15:53.514 ], 00:15:53.514 "driver_specific": {} 00:15:53.514 } 00:15:53.514 ] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.514 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.515 "name": "Existed_Raid", 00:15:53.515 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:53.515 "strip_size_kb": 64, 00:15:53.515 "state": "configuring", 00:15:53.515 "raid_level": "raid0", 00:15:53.515 "superblock": true, 
00:15:53.515 "num_base_bdevs": 3, 00:15:53.515 "num_base_bdevs_discovered": 2, 00:15:53.515 "num_base_bdevs_operational": 3, 00:15:53.515 "base_bdevs_list": [ 00:15:53.515 { 00:15:53.515 "name": "BaseBdev1", 00:15:53.515 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:53.515 "is_configured": true, 00:15:53.515 "data_offset": 2048, 00:15:53.515 "data_size": 63488 00:15:53.515 }, 00:15:53.515 { 00:15:53.515 "name": null, 00:15:53.515 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:53.515 "is_configured": false, 00:15:53.515 "data_offset": 0, 00:15:53.515 "data_size": 63488 00:15:53.515 }, 00:15:53.515 { 00:15:53.515 "name": "BaseBdev3", 00:15:53.515 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:53.515 "is_configured": true, 00:15:53.515 "data_offset": 2048, 00:15:53.515 "data_size": 63488 00:15:53.515 } 00:15:53.515 ] 00:15:53.515 }' 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.515 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.082 [2024-11-06 09:08:52.913474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.082 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.082 "name": "Existed_Raid", 00:15:54.082 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:54.082 "strip_size_kb": 64, 00:15:54.082 "state": "configuring", 00:15:54.082 "raid_level": "raid0", 00:15:54.082 "superblock": true, 00:15:54.082 "num_base_bdevs": 3, 00:15:54.082 "num_base_bdevs_discovered": 1, 00:15:54.082 "num_base_bdevs_operational": 3, 00:15:54.082 "base_bdevs_list": [ 00:15:54.082 { 00:15:54.082 "name": "BaseBdev1", 00:15:54.082 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:54.083 "is_configured": true, 00:15:54.083 "data_offset": 2048, 00:15:54.083 "data_size": 63488 00:15:54.083 }, 00:15:54.083 { 00:15:54.083 "name": null, 00:15:54.083 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:54.083 "is_configured": false, 00:15:54.083 "data_offset": 0, 00:15:54.083 "data_size": 63488 00:15:54.083 }, 00:15:54.083 { 00:15:54.083 "name": null, 00:15:54.083 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:54.083 "is_configured": false, 00:15:54.083 "data_offset": 0, 00:15:54.083 "data_size": 63488 00:15:54.083 } 00:15:54.083 ] 00:15:54.083 }' 00:15:54.083 09:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.083 09:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.341 [2024-11-06 09:08:53.345132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.341 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.600 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.600 "name": "Existed_Raid", 00:15:54.600 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:54.600 "strip_size_kb": 64, 00:15:54.600 "state": "configuring", 00:15:54.600 "raid_level": "raid0", 00:15:54.600 "superblock": true, 00:15:54.600 "num_base_bdevs": 3, 00:15:54.600 "num_base_bdevs_discovered": 2, 00:15:54.600 "num_base_bdevs_operational": 3, 00:15:54.600 "base_bdevs_list": [ 00:15:54.600 { 00:15:54.600 "name": "BaseBdev1", 00:15:54.600 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:54.600 "is_configured": true, 00:15:54.600 "data_offset": 2048, 00:15:54.600 "data_size": 63488 00:15:54.600 }, 00:15:54.600 { 00:15:54.600 "name": null, 00:15:54.601 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:54.601 "is_configured": false, 00:15:54.601 "data_offset": 0, 00:15:54.601 "data_size": 63488 00:15:54.601 }, 00:15:54.601 { 00:15:54.601 "name": "BaseBdev3", 00:15:54.601 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:54.601 "is_configured": true, 00:15:54.601 "data_offset": 2048, 00:15:54.601 "data_size": 63488 00:15:54.601 } 00:15:54.601 ] 00:15:54.601 }' 00:15:54.601 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.601 09:08:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.860 [2024-11-06 09:08:53.792509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.860 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.118 "name": "Existed_Raid", 00:15:55.118 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:55.118 "strip_size_kb": 64, 00:15:55.118 "state": "configuring", 00:15:55.118 "raid_level": "raid0", 00:15:55.118 "superblock": true, 00:15:55.118 "num_base_bdevs": 3, 00:15:55.118 "num_base_bdevs_discovered": 1, 00:15:55.118 "num_base_bdevs_operational": 3, 00:15:55.118 "base_bdevs_list": [ 00:15:55.118 { 00:15:55.118 "name": null, 00:15:55.118 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:55.118 "is_configured": false, 00:15:55.118 "data_offset": 0, 00:15:55.118 "data_size": 63488 00:15:55.118 }, 00:15:55.118 { 00:15:55.118 "name": null, 00:15:55.118 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:55.118 "is_configured": false, 00:15:55.118 "data_offset": 0, 00:15:55.118 
"data_size": 63488 00:15:55.118 }, 00:15:55.118 { 00:15:55.118 "name": "BaseBdev3", 00:15:55.118 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:55.118 "is_configured": true, 00:15:55.118 "data_offset": 2048, 00:15:55.118 "data_size": 63488 00:15:55.118 } 00:15:55.118 ] 00:15:55.118 }' 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.118 09:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.377 [2024-11-06 09:08:54.322580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:55.377 09:08:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.377 "name": "Existed_Raid", 00:15:55.377 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:55.377 "strip_size_kb": 64, 00:15:55.377 "state": "configuring", 00:15:55.377 "raid_level": "raid0", 00:15:55.377 "superblock": true, 00:15:55.377 "num_base_bdevs": 3, 00:15:55.377 
"num_base_bdevs_discovered": 2, 00:15:55.377 "num_base_bdevs_operational": 3, 00:15:55.377 "base_bdevs_list": [ 00:15:55.377 { 00:15:55.377 "name": null, 00:15:55.377 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:55.377 "is_configured": false, 00:15:55.377 "data_offset": 0, 00:15:55.377 "data_size": 63488 00:15:55.377 }, 00:15:55.377 { 00:15:55.377 "name": "BaseBdev2", 00:15:55.377 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:55.377 "is_configured": true, 00:15:55.377 "data_offset": 2048, 00:15:55.377 "data_size": 63488 00:15:55.377 }, 00:15:55.377 { 00:15:55.377 "name": "BaseBdev3", 00:15:55.377 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:55.377 "is_configured": true, 00:15:55.377 "data_offset": 2048, 00:15:55.377 "data_size": 63488 00:15:55.377 } 00:15:55.377 ] 00:15:55.377 }' 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.377 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3c830f6-0539-47f4-b5a7-8a3dbdba06b4 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 [2024-11-06 09:08:54.845004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:55.947 [2024-11-06 09:08:54.845461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:55.947 [2024-11-06 09:08:54.845490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:55.947 [2024-11-06 09:08:54.845779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:55.947 [2024-11-06 09:08:54.845953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:55.947 [2024-11-06 09:08:54.845965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:55.947 [2024-11-06 09:08:54.846108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.947 NewBaseBdev 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 [ 00:15:55.947 { 00:15:55.947 "name": "NewBaseBdev", 00:15:55.947 "aliases": [ 00:15:55.947 "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4" 00:15:55.947 ], 00:15:55.947 "product_name": "Malloc disk", 00:15:55.947 "block_size": 512, 00:15:55.947 "num_blocks": 65536, 00:15:55.947 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:55.947 "assigned_rate_limits": { 00:15:55.947 "rw_ios_per_sec": 0, 00:15:55.947 "rw_mbytes_per_sec": 0, 00:15:55.947 "r_mbytes_per_sec": 0, 00:15:55.947 "w_mbytes_per_sec": 0 00:15:55.947 }, 00:15:55.947 "claimed": true, 00:15:55.947 "claim_type": "exclusive_write", 00:15:55.947 "zoned": false, 00:15:55.947 "supported_io_types": { 00:15:55.947 "read": true, 00:15:55.947 "write": true, 
00:15:55.947 "unmap": true, 00:15:55.947 "flush": true, 00:15:55.947 "reset": true, 00:15:55.947 "nvme_admin": false, 00:15:55.947 "nvme_io": false, 00:15:55.947 "nvme_io_md": false, 00:15:55.947 "write_zeroes": true, 00:15:55.947 "zcopy": true, 00:15:55.947 "get_zone_info": false, 00:15:55.947 "zone_management": false, 00:15:55.947 "zone_append": false, 00:15:55.947 "compare": false, 00:15:55.947 "compare_and_write": false, 00:15:55.947 "abort": true, 00:15:55.947 "seek_hole": false, 00:15:55.947 "seek_data": false, 00:15:55.947 "copy": true, 00:15:55.947 "nvme_iov_md": false 00:15:55.947 }, 00:15:55.947 "memory_domains": [ 00:15:55.947 { 00:15:55.947 "dma_device_id": "system", 00:15:55.947 "dma_device_type": 1 00:15:55.947 }, 00:15:55.947 { 00:15:55.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.947 "dma_device_type": 2 00:15:55.947 } 00:15:55.947 ], 00:15:55.947 "driver_specific": {} 00:15:55.947 } 00:15:55.947 ] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.947 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.948 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.948 "name": "Existed_Raid", 00:15:55.948 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:55.948 "strip_size_kb": 64, 00:15:55.948 "state": "online", 00:15:55.948 "raid_level": "raid0", 00:15:55.948 "superblock": true, 00:15:55.948 "num_base_bdevs": 3, 00:15:55.948 "num_base_bdevs_discovered": 3, 00:15:55.948 "num_base_bdevs_operational": 3, 00:15:55.948 "base_bdevs_list": [ 00:15:55.948 { 00:15:55.948 "name": "NewBaseBdev", 00:15:55.948 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:55.948 "is_configured": true, 00:15:55.948 "data_offset": 2048, 00:15:55.948 "data_size": 63488 00:15:55.948 }, 00:15:55.948 { 00:15:55.948 "name": "BaseBdev2", 00:15:55.948 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:55.948 "is_configured": true, 00:15:55.948 "data_offset": 2048, 00:15:55.948 "data_size": 63488 00:15:55.948 }, 00:15:55.948 { 00:15:55.948 "name": "BaseBdev3", 00:15:55.948 "uuid": 
"8284c55c-d52b-4854-a513-922162865928", 00:15:55.948 "is_configured": true, 00:15:55.948 "data_offset": 2048, 00:15:55.948 "data_size": 63488 00:15:55.948 } 00:15:55.948 ] 00:15:55.948 }' 00:15:55.948 09:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.948 09:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.515 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.516 [2024-11-06 09:08:55.328697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.516 "name": "Existed_Raid", 00:15:56.516 "aliases": [ 00:15:56.516 "71c2026a-8018-45ce-8210-703ba6d8e916" 
00:15:56.516 ], 00:15:56.516 "product_name": "Raid Volume", 00:15:56.516 "block_size": 512, 00:15:56.516 "num_blocks": 190464, 00:15:56.516 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:56.516 "assigned_rate_limits": { 00:15:56.516 "rw_ios_per_sec": 0, 00:15:56.516 "rw_mbytes_per_sec": 0, 00:15:56.516 "r_mbytes_per_sec": 0, 00:15:56.516 "w_mbytes_per_sec": 0 00:15:56.516 }, 00:15:56.516 "claimed": false, 00:15:56.516 "zoned": false, 00:15:56.516 "supported_io_types": { 00:15:56.516 "read": true, 00:15:56.516 "write": true, 00:15:56.516 "unmap": true, 00:15:56.516 "flush": true, 00:15:56.516 "reset": true, 00:15:56.516 "nvme_admin": false, 00:15:56.516 "nvme_io": false, 00:15:56.516 "nvme_io_md": false, 00:15:56.516 "write_zeroes": true, 00:15:56.516 "zcopy": false, 00:15:56.516 "get_zone_info": false, 00:15:56.516 "zone_management": false, 00:15:56.516 "zone_append": false, 00:15:56.516 "compare": false, 00:15:56.516 "compare_and_write": false, 00:15:56.516 "abort": false, 00:15:56.516 "seek_hole": false, 00:15:56.516 "seek_data": false, 00:15:56.516 "copy": false, 00:15:56.516 "nvme_iov_md": false 00:15:56.516 }, 00:15:56.516 "memory_domains": [ 00:15:56.516 { 00:15:56.516 "dma_device_id": "system", 00:15:56.516 "dma_device_type": 1 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.516 "dma_device_type": 2 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "dma_device_id": "system", 00:15:56.516 "dma_device_type": 1 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.516 "dma_device_type": 2 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "dma_device_id": "system", 00:15:56.516 "dma_device_type": 1 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.516 "dma_device_type": 2 00:15:56.516 } 00:15:56.516 ], 00:15:56.516 "driver_specific": { 00:15:56.516 "raid": { 00:15:56.516 "uuid": "71c2026a-8018-45ce-8210-703ba6d8e916", 00:15:56.516 
"strip_size_kb": 64, 00:15:56.516 "state": "online", 00:15:56.516 "raid_level": "raid0", 00:15:56.516 "superblock": true, 00:15:56.516 "num_base_bdevs": 3, 00:15:56.516 "num_base_bdevs_discovered": 3, 00:15:56.516 "num_base_bdevs_operational": 3, 00:15:56.516 "base_bdevs_list": [ 00:15:56.516 { 00:15:56.516 "name": "NewBaseBdev", 00:15:56.516 "uuid": "a3c830f6-0539-47f4-b5a7-8a3dbdba06b4", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "name": "BaseBdev2", 00:15:56.516 "uuid": "c05c3e7d-fb11-443b-a31b-d929b4140bab", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "name": "BaseBdev3", 00:15:56.516 "uuid": "8284c55c-d52b-4854-a513-922162865928", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 } 00:15:56.516 ] 00:15:56.516 } 00:15:56.516 } 00:15:56.516 }' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:56.516 BaseBdev2 00:15:56.516 BaseBdev3' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.516 09:08:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.774 [2024-11-06 09:08:55.600165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.774 [2024-11-06 09:08:55.600197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.774 [2024-11-06 09:08:55.600300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.774 [2024-11-06 09:08:55.600356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.774 [2024-11-06 09:08:55.600370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64220 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64220 ']' 00:15:56.774 09:08:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64220 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64220 00:15:56.774 killing process with pid 64220 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64220' 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64220 00:15:56.774 [2024-11-06 09:08:55.651126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.774 09:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64220 00:15:57.033 [2024-11-06 09:08:55.963320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.410 ************************************ 00:15:58.410 END TEST raid_state_function_test_sb 00:15:58.410 ************************************ 00:15:58.410 09:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:58.410 00:15:58.410 real 0m10.590s 00:15:58.410 user 0m16.775s 00:15:58.410 sys 0m1.991s 00:15:58.410 09:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:58.410 09:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.410 09:08:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:58.410 09:08:57 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:58.410 09:08:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:58.410 09:08:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.410 ************************************ 00:15:58.410 START TEST raid_superblock_test 00:15:58.410 ************************************ 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:58.410 09:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64846 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64846 00:15:58.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 64846 ']' 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:58.410 09:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.410 [2024-11-06 09:08:57.312908] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:15:58.410 [2024-11-06 09:08:57.313062] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64846 ] 00:15:58.668 [2024-11-06 09:08:57.486921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.668 [2024-11-06 09:08:57.665740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.926 [2024-11-06 09:08:57.899059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.926 [2024-11-06 09:08:57.899130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:59.184 
09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.184 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.442 malloc1 00:15:59.442 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.442 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.442 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 [2024-11-06 09:08:58.244873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.443 [2024-11-06 09:08:58.244950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.443 [2024-11-06 09:08:58.244980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.443 [2024-11-06 09:08:58.244994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.443 [2024-11-06 09:08:58.247595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.443 [2024-11-06 09:08:58.247637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.443 pt1 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 malloc2 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 [2024-11-06 09:08:58.305334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.443 [2024-11-06 09:08:58.305539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.443 [2024-11-06 09:08:58.305626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.443 [2024-11-06 09:08:58.305719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.443 [2024-11-06 09:08:58.308609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.443 [2024-11-06 09:08:58.308769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.443 
pt2 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 malloc3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 [2024-11-06 09:08:58.376515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.443 [2024-11-06 09:08:58.376701] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.443 [2024-11-06 09:08:58.376766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:59.443 [2024-11-06 09:08:58.376782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.443 [2024-11-06 09:08:58.379387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.443 [2024-11-06 09:08:58.379429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.443 pt3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 [2024-11-06 09:08:58.388561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.443 [2024-11-06 09:08:58.390810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.443 [2024-11-06 09:08:58.391020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.443 [2024-11-06 09:08:58.391204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.443 [2024-11-06 09:08:58.391221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.443 [2024-11-06 09:08:58.391538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:59.443 [2024-11-06 09:08:58.391713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.443 [2024-11-06 09:08:58.391725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.443 [2024-11-06 09:08:58.391900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.443 09:08:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.443 "name": "raid_bdev1", 00:15:59.443 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:15:59.443 "strip_size_kb": 64, 00:15:59.443 "state": "online", 00:15:59.443 "raid_level": "raid0", 00:15:59.443 "superblock": true, 00:15:59.443 "num_base_bdevs": 3, 00:15:59.443 "num_base_bdevs_discovered": 3, 00:15:59.443 "num_base_bdevs_operational": 3, 00:15:59.443 "base_bdevs_list": [ 00:15:59.443 { 00:15:59.443 "name": "pt1", 00:15:59.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.443 "is_configured": true, 00:15:59.443 "data_offset": 2048, 00:15:59.443 "data_size": 63488 00:15:59.443 }, 00:15:59.443 { 00:15:59.443 "name": "pt2", 00:15:59.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.443 "is_configured": true, 00:15:59.443 "data_offset": 2048, 00:15:59.443 "data_size": 63488 00:15:59.443 }, 00:15:59.443 { 00:15:59.443 "name": "pt3", 00:15:59.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.443 "is_configured": true, 00:15:59.443 "data_offset": 2048, 00:15:59.443 "data_size": 63488 00:15:59.443 } 00:15:59.443 ] 00:15:59.443 }' 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.443 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.033 [2024-11-06 09:08:58.828434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.033 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.033 "name": "raid_bdev1", 00:16:00.033 "aliases": [ 00:16:00.033 "2a317468-dfaf-4067-8e8e-a1a347b409b8" 00:16:00.033 ], 00:16:00.033 "product_name": "Raid Volume", 00:16:00.033 "block_size": 512, 00:16:00.033 "num_blocks": 190464, 00:16:00.033 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:00.033 "assigned_rate_limits": { 00:16:00.033 "rw_ios_per_sec": 0, 00:16:00.033 "rw_mbytes_per_sec": 0, 00:16:00.033 "r_mbytes_per_sec": 0, 00:16:00.033 "w_mbytes_per_sec": 0 00:16:00.033 }, 00:16:00.033 "claimed": false, 00:16:00.033 "zoned": false, 00:16:00.033 "supported_io_types": { 00:16:00.033 "read": true, 00:16:00.033 "write": true, 00:16:00.033 "unmap": true, 00:16:00.033 "flush": true, 00:16:00.033 "reset": true, 00:16:00.033 "nvme_admin": false, 00:16:00.033 "nvme_io": false, 00:16:00.033 "nvme_io_md": false, 00:16:00.033 "write_zeroes": true, 00:16:00.033 "zcopy": false, 00:16:00.033 "get_zone_info": false, 00:16:00.033 "zone_management": false, 00:16:00.033 "zone_append": false, 00:16:00.033 "compare": 
false, 00:16:00.033 "compare_and_write": false, 00:16:00.033 "abort": false, 00:16:00.033 "seek_hole": false, 00:16:00.033 "seek_data": false, 00:16:00.033 "copy": false, 00:16:00.033 "nvme_iov_md": false 00:16:00.033 }, 00:16:00.033 "memory_domains": [ 00:16:00.033 { 00:16:00.033 "dma_device_id": "system", 00:16:00.033 "dma_device_type": 1 00:16:00.033 }, 00:16:00.033 { 00:16:00.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.034 "dma_device_type": 2 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "dma_device_id": "system", 00:16:00.034 "dma_device_type": 1 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.034 "dma_device_type": 2 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "dma_device_id": "system", 00:16:00.034 "dma_device_type": 1 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.034 "dma_device_type": 2 00:16:00.034 } 00:16:00.034 ], 00:16:00.034 "driver_specific": { 00:16:00.034 "raid": { 00:16:00.034 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:00.034 "strip_size_kb": 64, 00:16:00.034 "state": "online", 00:16:00.034 "raid_level": "raid0", 00:16:00.034 "superblock": true, 00:16:00.034 "num_base_bdevs": 3, 00:16:00.034 "num_base_bdevs_discovered": 3, 00:16:00.034 "num_base_bdevs_operational": 3, 00:16:00.034 "base_bdevs_list": [ 00:16:00.034 { 00:16:00.034 "name": "pt1", 00:16:00.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.034 "is_configured": true, 00:16:00.034 "data_offset": 2048, 00:16:00.034 "data_size": 63488 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "name": "pt2", 00:16:00.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.034 "is_configured": true, 00:16:00.034 "data_offset": 2048, 00:16:00.034 "data_size": 63488 00:16:00.034 }, 00:16:00.034 { 00:16:00.034 "name": "pt3", 00:16:00.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.034 "is_configured": true, 00:16:00.034 "data_offset": 2048, 00:16:00.034 "data_size": 
63488 00:16:00.034 } 00:16:00.034 ] 00:16:00.034 } 00:16:00.034 } 00:16:00.034 }' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:00.034 pt2 00:16:00.034 pt3' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.034 09:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 
09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.291 [2024-11-06 09:08:59.115995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2a317468-dfaf-4067-8e8e-a1a347b409b8 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2a317468-dfaf-4067-8e8e-a1a347b409b8 ']' 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.291 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.291 [2024-11-06 09:08:59.143668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.291 [2024-11-06 09:08:59.143707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.292 [2024-11-06 09:08:59.143798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.292 [2024-11-06 09:08:59.143865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.292 [2024-11-06 09:08:59.143877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:00.292 09:08:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 [2024-11-06 09:08:59.283531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:00.292 [2024-11-06 09:08:59.285832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:00.292 [2024-11-06 09:08:59.285893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:00.292 [2024-11-06 09:08:59.285952] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:00.292 [2024-11-06 09:08:59.286013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:00.292 [2024-11-06 09:08:59.286037] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:00.292 [2024-11-06 09:08:59.286060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.292 [2024-11-06 09:08:59.286075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:00.292 request: 00:16:00.292 { 00:16:00.292 "name": "raid_bdev1", 00:16:00.292 "raid_level": "raid0", 00:16:00.292 "base_bdevs": [ 00:16:00.292 "malloc1", 00:16:00.292 "malloc2", 00:16:00.292 "malloc3" 00:16:00.292 ], 00:16:00.292 "strip_size_kb": 64, 00:16:00.292 "superblock": false, 00:16:00.292 "method": "bdev_raid_create", 00:16:00.292 "req_id": 1 00:16:00.292 } 00:16:00.292 Got JSON-RPC error response 00:16:00.292 response: 00:16:00.292 { 00:16:00.292 "code": -17, 00:16:00.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:00.292 } 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:00.292 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.551 [2024-11-06 09:08:59.351393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:00.551 [2024-11-06 09:08:59.351470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.551 [2024-11-06 09:08:59.351495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:00.551 [2024-11-06 09:08:59.351508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.551 [2024-11-06 09:08:59.354230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.551 [2024-11-06 09:08:59.354298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:00.551 [2024-11-06 09:08:59.354410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:00.551 [2024-11-06 09:08:59.354481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:16:00.551 pt1 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.551 "name": "raid_bdev1", 00:16:00.551 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:00.551 
"strip_size_kb": 64, 00:16:00.551 "state": "configuring", 00:16:00.551 "raid_level": "raid0", 00:16:00.551 "superblock": true, 00:16:00.551 "num_base_bdevs": 3, 00:16:00.551 "num_base_bdevs_discovered": 1, 00:16:00.551 "num_base_bdevs_operational": 3, 00:16:00.551 "base_bdevs_list": [ 00:16:00.551 { 00:16:00.551 "name": "pt1", 00:16:00.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.551 "is_configured": true, 00:16:00.551 "data_offset": 2048, 00:16:00.551 "data_size": 63488 00:16:00.551 }, 00:16:00.551 { 00:16:00.551 "name": null, 00:16:00.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.551 "is_configured": false, 00:16:00.551 "data_offset": 2048, 00:16:00.551 "data_size": 63488 00:16:00.551 }, 00:16:00.551 { 00:16:00.551 "name": null, 00:16:00.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.551 "is_configured": false, 00:16:00.551 "data_offset": 2048, 00:16:00.551 "data_size": 63488 00:16:00.551 } 00:16:00.551 ] 00:16:00.551 }' 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.551 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.808 [2024-11-06 09:08:59.782786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.808 [2024-11-06 09:08:59.782991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.808 [2024-11-06 09:08:59.783070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:16:00.808 [2024-11-06 09:08:59.783152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.808 [2024-11-06 09:08:59.783656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.808 [2024-11-06 09:08:59.783678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.808 [2024-11-06 09:08:59.783790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.808 [2024-11-06 09:08:59.783814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.808 pt2 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.808 [2024-11-06 09:08:59.794764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:00.808 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.809 09:08:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.809 "name": "raid_bdev1", 00:16:00.809 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:00.809 "strip_size_kb": 64, 00:16:00.809 "state": "configuring", 00:16:00.809 "raid_level": "raid0", 00:16:00.809 "superblock": true, 00:16:00.809 "num_base_bdevs": 3, 00:16:00.809 "num_base_bdevs_discovered": 1, 00:16:00.809 "num_base_bdevs_operational": 3, 00:16:00.809 "base_bdevs_list": [ 00:16:00.809 { 00:16:00.809 "name": "pt1", 00:16:00.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.809 "is_configured": true, 00:16:00.809 "data_offset": 2048, 00:16:00.809 "data_size": 63488 00:16:00.809 }, 00:16:00.809 { 00:16:00.809 "name": null, 00:16:00.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.809 "is_configured": false, 00:16:00.809 "data_offset": 0, 00:16:00.809 "data_size": 63488 00:16:00.809 }, 00:16:00.809 { 00:16:00.809 "name": null, 00:16:00.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.809 
"is_configured": false, 00:16:00.809 "data_offset": 2048, 00:16:00.809 "data_size": 63488 00:16:00.809 } 00:16:00.809 ] 00:16:00.809 }' 00:16:00.809 09:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.065 09:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.323 [2024-11-06 09:09:00.258263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.323 [2024-11-06 09:09:00.258494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.323 [2024-11-06 09:09:00.258526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:01.323 [2024-11-06 09:09:00.258542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.323 [2024-11-06 09:09:00.259075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.323 [2024-11-06 09:09:00.259107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.323 [2024-11-06 09:09:00.259200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.323 [2024-11-06 09:09:00.259226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.323 pt2 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.323 [2024-11-06 09:09:00.270227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:01.323 [2024-11-06 09:09:00.270419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.323 [2024-11-06 09:09:00.270448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:01.323 [2024-11-06 09:09:00.270464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.323 [2024-11-06 09:09:00.270941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.323 [2024-11-06 09:09:00.270967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:01.323 [2024-11-06 09:09:00.271045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:01.323 [2024-11-06 09:09:00.271070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.323 [2024-11-06 09:09:00.271196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:01.323 [2024-11-06 09:09:00.271210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:01.323 [2024-11-06 09:09:00.271506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:01.323 [2024-11-06 09:09:00.271656] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:01.323 [2024-11-06 09:09:00.271673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:01.323 [2024-11-06 09:09:00.271838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.323 pt3 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.323 "name": "raid_bdev1", 00:16:01.323 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:01.323 "strip_size_kb": 64, 00:16:01.323 "state": "online", 00:16:01.323 "raid_level": "raid0", 00:16:01.323 "superblock": true, 00:16:01.323 "num_base_bdevs": 3, 00:16:01.323 "num_base_bdevs_discovered": 3, 00:16:01.323 "num_base_bdevs_operational": 3, 00:16:01.323 "base_bdevs_list": [ 00:16:01.323 { 00:16:01.323 "name": "pt1", 00:16:01.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.323 "is_configured": true, 00:16:01.323 "data_offset": 2048, 00:16:01.323 "data_size": 63488 00:16:01.323 }, 00:16:01.323 { 00:16:01.323 "name": "pt2", 00:16:01.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.323 "is_configured": true, 00:16:01.323 "data_offset": 2048, 00:16:01.323 "data_size": 63488 00:16:01.323 }, 00:16:01.323 { 00:16:01.323 "name": "pt3", 00:16:01.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.323 "is_configured": true, 00:16:01.323 "data_offset": 2048, 00:16:01.323 "data_size": 63488 00:16:01.323 } 00:16:01.323 ] 00:16:01.323 }' 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.323 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:01.891 09:09:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.891 [2024-11-06 09:09:00.678061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.891 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.891 "name": "raid_bdev1", 00:16:01.891 "aliases": [ 00:16:01.891 "2a317468-dfaf-4067-8e8e-a1a347b409b8" 00:16:01.891 ], 00:16:01.891 "product_name": "Raid Volume", 00:16:01.891 "block_size": 512, 00:16:01.891 "num_blocks": 190464, 00:16:01.891 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:01.891 "assigned_rate_limits": { 00:16:01.891 "rw_ios_per_sec": 0, 00:16:01.891 "rw_mbytes_per_sec": 0, 00:16:01.891 "r_mbytes_per_sec": 0, 00:16:01.891 "w_mbytes_per_sec": 0 00:16:01.891 }, 00:16:01.891 "claimed": false, 00:16:01.891 "zoned": false, 00:16:01.891 "supported_io_types": { 00:16:01.891 "read": true, 00:16:01.891 "write": true, 00:16:01.891 "unmap": true, 00:16:01.891 "flush": true, 00:16:01.891 "reset": true, 00:16:01.891 "nvme_admin": false, 00:16:01.891 "nvme_io": false, 00:16:01.891 "nvme_io_md": false, 00:16:01.891 
"write_zeroes": true, 00:16:01.891 "zcopy": false, 00:16:01.891 "get_zone_info": false, 00:16:01.892 "zone_management": false, 00:16:01.892 "zone_append": false, 00:16:01.892 "compare": false, 00:16:01.892 "compare_and_write": false, 00:16:01.892 "abort": false, 00:16:01.892 "seek_hole": false, 00:16:01.892 "seek_data": false, 00:16:01.892 "copy": false, 00:16:01.892 "nvme_iov_md": false 00:16:01.892 }, 00:16:01.892 "memory_domains": [ 00:16:01.892 { 00:16:01.892 "dma_device_id": "system", 00:16:01.892 "dma_device_type": 1 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.892 "dma_device_type": 2 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "dma_device_id": "system", 00:16:01.892 "dma_device_type": 1 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.892 "dma_device_type": 2 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "dma_device_id": "system", 00:16:01.892 "dma_device_type": 1 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.892 "dma_device_type": 2 00:16:01.892 } 00:16:01.892 ], 00:16:01.892 "driver_specific": { 00:16:01.892 "raid": { 00:16:01.892 "uuid": "2a317468-dfaf-4067-8e8e-a1a347b409b8", 00:16:01.892 "strip_size_kb": 64, 00:16:01.892 "state": "online", 00:16:01.892 "raid_level": "raid0", 00:16:01.892 "superblock": true, 00:16:01.892 "num_base_bdevs": 3, 00:16:01.892 "num_base_bdevs_discovered": 3, 00:16:01.892 "num_base_bdevs_operational": 3, 00:16:01.892 "base_bdevs_list": [ 00:16:01.892 { 00:16:01.892 "name": "pt1", 00:16:01.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.892 "is_configured": true, 00:16:01.892 "data_offset": 2048, 00:16:01.892 "data_size": 63488 00:16:01.892 }, 00:16:01.892 { 00:16:01.892 "name": "pt2", 00:16:01.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.892 "is_configured": true, 00:16:01.892 "data_offset": 2048, 00:16:01.892 "data_size": 63488 00:16:01.892 }, 00:16:01.892 
{ 00:16:01.892 "name": "pt3", 00:16:01.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.892 "is_configured": true, 00:16:01.892 "data_offset": 2048, 00:16:01.892 "data_size": 63488 00:16:01.892 } 00:16:01.892 ] 00:16:01.892 } 00:16:01.892 } 00:16:01.892 }' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:01.892 pt2 00:16:01.892 pt3' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:01.892 09:09:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.892 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.150 09:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:02.150 
[2024-11-06 09:09:00.966000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2a317468-dfaf-4067-8e8e-a1a347b409b8 '!=' 2a317468-dfaf-4067-8e8e-a1a347b409b8 ']' 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64846 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 64846 ']' 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 64846 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64846 00:16:02.150 killing process with pid 64846 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.150 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64846' 00:16:02.151 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 64846 00:16:02.151 [2024-11-06 09:09:01.040229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.151 [2024-11-06 09:09:01.040358] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.151 09:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 64846 00:16:02.151 [2024-11-06 09:09:01.040425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.151 [2024-11-06 09:09:01.040440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:02.408 [2024-11-06 09:09:01.370225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.797 09:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:03.797 00:16:03.797 real 0m5.353s 00:16:03.797 user 0m7.607s 00:16:03.797 sys 0m1.028s 00:16:03.797 09:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:03.797 09:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 ************************************ 00:16:03.797 END TEST raid_superblock_test 00:16:03.797 ************************************ 00:16:03.797 09:09:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:16:03.797 09:09:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:03.797 09:09:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:03.797 09:09:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 ************************************ 00:16:03.797 START TEST raid_read_error_test 00:16:03.797 ************************************ 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:03.797 09:09:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.raQZrXkyl5 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65099 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65099 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65099 ']' 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:03.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:03.797 09:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 [2024-11-06 09:09:02.741638] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:16:03.797 [2024-11-06 09:09:02.741813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65099 ] 00:16:04.055 [2024-11-06 09:09:02.925763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.055 [2024-11-06 09:09:03.056581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.315 [2024-11-06 09:09:03.290347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.315 [2024-11-06 09:09:03.290425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 BaseBdev1_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 true 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 [2024-11-06 09:09:03.687399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:04.882 [2024-11-06 09:09:03.687476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.882 [2024-11-06 09:09:03.687505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:04.882 [2024-11-06 09:09:03.687522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.882 [2024-11-06 09:09:03.690234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.882 [2024-11-06 09:09:03.690291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.882 BaseBdev1 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 BaseBdev2_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 true 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.882 [2024-11-06 09:09:03.760474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:04.882 [2024-11-06 09:09:03.760549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.882 [2024-11-06 09:09:03.760573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:04.882 [2024-11-06 09:09:03.760588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.882 [2024-11-06 09:09:03.763532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.882 [2024-11-06 09:09:03.763587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.882 BaseBdev2 00:16:04.882 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.883 BaseBdev3_malloc 00:16:04.883 09:09:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.883 true 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.883 [2024-11-06 09:09:03.842571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:04.883 [2024-11-06 09:09:03.842638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.883 [2024-11-06 09:09:03.842663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:04.883 [2024-11-06 09:09:03.842689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.883 [2024-11-06 09:09:03.845514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.883 [2024-11-06 09:09:03.845568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:04.883 BaseBdev3 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.883 [2024-11-06 09:09:03.854645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.883 [2024-11-06 09:09:03.857027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.883 [2024-11-06 09:09:03.857126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.883 [2024-11-06 09:09:03.857370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:04.883 [2024-11-06 09:09:03.857389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.883 [2024-11-06 09:09:03.857722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:04.883 [2024-11-06 09:09:03.857910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:04.883 [2024-11-06 09:09:03.857928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:04.883 [2024-11-06 09:09:03.858122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.883 09:09:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.883 "name": "raid_bdev1", 00:16:04.883 "uuid": "30541444-a514-4839-a605-f324e601b804", 00:16:04.883 "strip_size_kb": 64, 00:16:04.883 "state": "online", 00:16:04.883 "raid_level": "raid0", 00:16:04.883 "superblock": true, 00:16:04.883 "num_base_bdevs": 3, 00:16:04.883 "num_base_bdevs_discovered": 3, 00:16:04.883 "num_base_bdevs_operational": 3, 00:16:04.883 "base_bdevs_list": [ 00:16:04.883 { 00:16:04.883 "name": "BaseBdev1", 00:16:04.883 "uuid": "32298e59-5bf2-5071-8b26-2c9f84b0beb9", 00:16:04.883 "is_configured": true, 00:16:04.883 "data_offset": 2048, 00:16:04.883 "data_size": 63488 00:16:04.883 }, 00:16:04.883 { 00:16:04.883 "name": "BaseBdev2", 00:16:04.883 "uuid": "7b406443-4370-5f89-b629-229ce629c59f", 00:16:04.883 "is_configured": true, 00:16:04.883 "data_offset": 2048, 00:16:04.883 "data_size": 63488 
00:16:04.883 }, 00:16:04.883 { 00:16:04.883 "name": "BaseBdev3", 00:16:04.883 "uuid": "f8e79dad-1001-5998-b1ae-8d1ba0b04c7e", 00:16:04.883 "is_configured": true, 00:16:04.883 "data_offset": 2048, 00:16:04.883 "data_size": 63488 00:16:04.883 } 00:16:04.883 ] 00:16:04.883 }' 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.883 09:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.451 09:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:05.451 09:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:05.451 [2024-11-06 09:09:04.443401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.387 "name": "raid_bdev1", 00:16:06.387 "uuid": "30541444-a514-4839-a605-f324e601b804", 00:16:06.387 "strip_size_kb": 64, 00:16:06.387 "state": "online", 00:16:06.387 "raid_level": "raid0", 00:16:06.387 "superblock": true, 00:16:06.387 "num_base_bdevs": 3, 00:16:06.387 "num_base_bdevs_discovered": 3, 00:16:06.387 "num_base_bdevs_operational": 3, 00:16:06.387 "base_bdevs_list": [ 00:16:06.387 { 00:16:06.387 "name": "BaseBdev1", 00:16:06.387 "uuid": "32298e59-5bf2-5071-8b26-2c9f84b0beb9", 00:16:06.387 "is_configured": true, 00:16:06.387 "data_offset": 2048, 00:16:06.387 "data_size": 63488 
00:16:06.387 }, 00:16:06.387 { 00:16:06.387 "name": "BaseBdev2", 00:16:06.387 "uuid": "7b406443-4370-5f89-b629-229ce629c59f", 00:16:06.387 "is_configured": true, 00:16:06.387 "data_offset": 2048, 00:16:06.387 "data_size": 63488 00:16:06.387 }, 00:16:06.387 { 00:16:06.387 "name": "BaseBdev3", 00:16:06.387 "uuid": "f8e79dad-1001-5998-b1ae-8d1ba0b04c7e", 00:16:06.387 "is_configured": true, 00:16:06.387 "data_offset": 2048, 00:16:06.387 "data_size": 63488 00:16:06.387 } 00:16:06.387 ] 00:16:06.387 }' 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.387 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.957 [2024-11-06 09:09:05.804500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.957 [2024-11-06 09:09:05.804539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.957 [2024-11-06 09:09:05.807390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.957 [2024-11-06 09:09:05.807445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.957 [2024-11-06 09:09:05.807487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.957 [2024-11-06 09:09:05.807499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:06.957 { 00:16:06.957 "results": [ 00:16:06.957 { 00:16:06.957 "job": "raid_bdev1", 00:16:06.957 "core_mask": "0x1", 00:16:06.957 "workload": "randrw", 00:16:06.957 "percentage": 50, 
00:16:06.957 "status": "finished", 00:16:06.957 "queue_depth": 1, 00:16:06.957 "io_size": 131072, 00:16:06.957 "runtime": 1.36105, 00:16:06.957 "iops": 14393.299290988574, 00:16:06.957 "mibps": 1799.1624113735718, 00:16:06.957 "io_failed": 1, 00:16:06.957 "io_timeout": 0, 00:16:06.957 "avg_latency_us": 96.5152645495975, 00:16:06.957 "min_latency_us": 22.51566265060241, 00:16:06.957 "max_latency_us": 1559.4409638554216 00:16:06.957 } 00:16:06.957 ], 00:16:06.957 "core_count": 1 00:16:06.957 } 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65099 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65099 ']' 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65099 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65099 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:06.957 killing process with pid 65099 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65099' 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65099 00:16:06.957 [2024-11-06 09:09:05.859979] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.957 09:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65099 00:16:07.215 [2024-11-06 
09:09:06.113635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.raQZrXkyl5 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:16:08.593 00:16:08.593 real 0m4.776s 00:16:08.593 user 0m5.702s 00:16:08.593 sys 0m0.638s 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:08.593 09:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.593 ************************************ 00:16:08.593 END TEST raid_read_error_test 00:16:08.593 ************************************ 00:16:08.593 09:09:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:16:08.593 09:09:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:08.593 09:09:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.593 09:09:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.593 ************************************ 00:16:08.593 START TEST raid_write_error_test 00:16:08.593 ************************************ 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:16:08.593 09:09:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:08.593 09:09:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iX4h2RgHlw
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65245
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65245
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65245 ']'
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:08.593 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:08.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:08.594 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:08.594 09:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.594 [2024-11-06 09:09:07.605303] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization...
[2024-11-06 09:09:07.605485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65245 ]
00:16:08.852 [2024-11-06 09:09:07.788002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:09.111 [2024-11-06 09:09:07.917095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.111 [2024-11-06 09:09:08.140361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:09.111 [2024-11-06 09:09:08.140438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 BaseBdev1_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 true
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 [2024-11-06 09:09:08.557873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:16:09.678 [2024-11-06 09:09:08.558086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:09.678 [2024-11-06 09:09:08.558123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:16:09.678 [2024-11-06 09:09:08.558139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:09.678 [2024-11-06 09:09:08.560865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:09.678 [2024-11-06 09:09:08.560916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:09.678 BaseBdev1
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 BaseBdev2_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 true
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 [2024-11-06 09:09:08.630404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:16:09.678 [2024-11-06 09:09:08.630466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:09.678 [2024-11-06 09:09:08.630487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:16:09.678 [2024-11-06 09:09:08.630502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:09.678 [2024-11-06 09:09:08.632959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:09.678 [2024-11-06 09:09:08.633123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:09.678 BaseBdev2
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 BaseBdev3_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.678 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.678 true
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.679 [2024-11-06 09:09:08.705314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:16:09.679 [2024-11-06 09:09:08.705377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:09.679 [2024-11-06 09:09:08.705399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:09.679 [2024-11-06 09:09:08.705414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:09.679 [2024-11-06 09:09:08.708062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:09.679 [2024-11-06 09:09:08.708108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:09.679 BaseBdev3
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.679 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.679 [2024-11-06 09:09:08.713397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:09.679 [2024-11-06 09:09:08.715783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:09.679 [2024-11-06 09:09:08.715989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:09.679 [2024-11-06 09:09:08.716317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:16:09.679 [2024-11-06 09:09:08.716430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:09.679 [2024-11-06 09:09:08.716780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:16:09.679 [2024-11-06 09:09:08.717063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:16:09.937 [2024-11-06 09:09:08.717175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:16:09.937 [2024-11-06 09:09:08.717518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.937 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:09.937 "name": "raid_bdev1",
00:16:09.937 "uuid": "2ae76caa-8290-4ba8-aadc-794d31139637",
00:16:09.937 "strip_size_kb": 64,
00:16:09.937 "state": "online",
00:16:09.937 "raid_level": "raid0",
00:16:09.937 "superblock": true,
00:16:09.937 "num_base_bdevs": 3,
00:16:09.937 "num_base_bdevs_discovered": 3,
00:16:09.937 "num_base_bdevs_operational": 3,
00:16:09.937 "base_bdevs_list": [
00:16:09.937 {
00:16:09.937 "name": "BaseBdev1",
00:16:09.937 "uuid": "24a5695b-5947-5636-b44b-be8c5bd6052a",
00:16:09.937 "is_configured": true,
00:16:09.937 "data_offset": 2048,
00:16:09.937 "data_size": 63488
00:16:09.937 },
00:16:09.937 {
00:16:09.937 "name": "BaseBdev2",
00:16:09.937 "uuid": "a4db1ee5-c65b-5e75-b103-cbd36bf49f16",
00:16:09.937 "is_configured": true,
00:16:09.937 "data_offset": 2048,
00:16:09.937 "data_size": 63488
00:16:09.937 },
00:16:09.937 {
00:16:09.937 "name": "BaseBdev3",
00:16:09.937 "uuid": "2c04c321-0edd-56f2-bfa8-f97c059675be",
00:16:09.937 "is_configured": true,
00:16:09.937 "data_offset": 2048,
00:16:09.937 "data_size": 63488
00:16:09.938 }
00:16:09.938 ]
00:16:09.938 }'
00:16:09.938 09:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:09.938 09:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:10.196 09:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:16:10.196 09:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:16:10.456 [2024-11-06 09:09:09.290255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:11.393 "name": "raid_bdev1",
00:16:11.393 "uuid": "2ae76caa-8290-4ba8-aadc-794d31139637",
00:16:11.393 "strip_size_kb": 64,
00:16:11.393 "state": "online",
00:16:11.393 "raid_level": "raid0",
00:16:11.393 "superblock": true,
00:16:11.393 "num_base_bdevs": 3,
00:16:11.393 "num_base_bdevs_discovered": 3,
00:16:11.393 "num_base_bdevs_operational": 3,
00:16:11.393 "base_bdevs_list": [
00:16:11.393 {
00:16:11.393 "name": "BaseBdev1",
00:16:11.393 "uuid": "24a5695b-5947-5636-b44b-be8c5bd6052a",
00:16:11.393 "is_configured": true,
00:16:11.393 "data_offset": 2048,
00:16:11.393 "data_size": 63488
00:16:11.393 },
00:16:11.393 {
00:16:11.393 "name": "BaseBdev2",
00:16:11.393 "uuid": "a4db1ee5-c65b-5e75-b103-cbd36bf49f16",
00:16:11.393 "is_configured": true,
00:16:11.393 "data_offset": 2048,
00:16:11.393 "data_size": 63488
00:16:11.393 },
00:16:11.393 {
00:16:11.393 "name": "BaseBdev3",
00:16:11.393 "uuid": "2c04c321-0edd-56f2-bfa8-f97c059675be",
00:16:11.393 "is_configured": true,
00:16:11.393 "data_offset": 2048,
00:16:11.393 "data_size": 63488
00:16:11.393 }
00:16:11.393 ]
00:16:11.393 }'
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:11.393 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:11.651 [2024-11-06 09:09:10.645657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:11.651 [2024-11-06 09:09:10.645690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:11.651 [2024-11-06 09:09:10.648758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:11.651 [2024-11-06 09:09:10.648938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:11.651 [2024-11-06 09:09:10.649036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:11.651 [2024-11-06 09:09:10.649157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:16:11.651 {
00:16:11.651 "results": [
00:16:11.651 {
00:16:11.651 "job": "raid_bdev1",
00:16:11.651 "core_mask": "0x1",
00:16:11.651 "workload": "randrw",
00:16:11.651 "percentage": 50,
00:16:11.651 "status": "finished",
00:16:11.651 "queue_depth": 1,
00:16:11.651 "io_size": 131072,
00:16:11.651 "runtime": 1.355165,
00:16:11.651 "iops": 15487.412971852136,
00:16:11.651 "mibps": 1935.926621481517,
00:16:11.651 "io_failed": 1,
00:16:11.651 "io_timeout": 0,
00:16:11.651 "avg_latency_us": 89.44315762262924,
00:16:11.651 "min_latency_us": 26.936546184738955,
00:16:11.651 "max_latency_us": 1487.0618473895581
00:16:11.651 }
00:16:11.651 ],
00:16:11.651 "core_count": 1
00:16:11.651 }
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65245
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65245 ']'
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65245
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:11.651 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65245
00:16:11.934 killing process with pid 65245
09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:11.934 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:11.934 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65245'
00:16:11.934 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65245
00:16:11.934 [2024-11-06 09:09:10.722698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:11.934 09:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65245
00:16:12.195 [2024-11-06 09:09:10.976937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iX4h2RgHlw
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:16:13.575
00:16:13.575 real 0m4.787s
00:16:13.575 user 0m5.667s
00:16:13.575 sys 0m0.679s
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:13.575 09:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.575 ************************************
00:16:13.575 END TEST raid_write_error_test
00:16:13.575 ************************************
00:16:13.575 09:09:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:16:13.575 09:09:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:16:13.575 09:09:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:16:13.575 09:09:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:13.575 09:09:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:13.575 ************************************
00:16:13.575 START TEST raid_state_function_test
00:16:13.575 ************************************
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65389
00:16:13.575 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:16:13.576 Process raid pid: 65389
09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65389'
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65389
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65389 ']'
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:13.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:13.576 09:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.576 [2024-11-06 09:09:12.466267] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization...
[2024-11-06 09:09:12.466408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:13.834 [2024-11-06 09:09:12.655216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.834 [2024-11-06 09:09:12.794009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:14.093 [2024-11-06 09:09:13.028140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:14.093 [2024-11-06 09:09:13.028185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.352 [2024-11-06 09:09:13.359464] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:14.352 [2024-11-06 09:09:13.359549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:14.352 [2024-11-06 09:09:13.359563] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:14.352 [2024-11-06 09:09:13.359578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:14.352 [2024-11-06 09:09:13.359586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:14.352 [2024-11-06 09:09:13.359600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:14.352 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.353 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.611 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.611 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:14.611 "name": "Existed_Raid",
00:16:14.611 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:14.611 "strip_size_kb": 64,
00:16:14.611 "state": "configuring",
00:16:14.611 "raid_level": "concat",
00:16:14.611 "superblock": false,
00:16:14.611 "num_base_bdevs": 3,
00:16:14.611 "num_base_bdevs_discovered": 0,
00:16:14.611 "num_base_bdevs_operational": 3,
00:16:14.611 "base_bdevs_list": [
00:16:14.611 {
00:16:14.611 "name": "BaseBdev1",
00:16:14.611 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:14.611 "is_configured": false,
00:16:14.611 "data_offset": 0,
00:16:14.611 "data_size": 0
00:16:14.611 },
00:16:14.611 {
00:16:14.611 "name": "BaseBdev2",
00:16:14.611 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:14.611 "is_configured": false,
00:16:14.611 "data_offset": 0,
00:16:14.611 "data_size": 0
00:16:14.611 },
00:16:14.611 {
00:16:14.611 "name": "BaseBdev3",
00:16:14.611 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:14.611 "is_configured": false,
00:16:14.611 "data_offset": 0,
00:16:14.611 "data_size": 0
00:16:14.611 }
00:16:14.611 ]
00:16:14.611 }'
00:16:14.611 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:14.611 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.870 [2024-11-06 09:09:13.850851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:14.870 [2024-11-06 09:09:13.850895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.870 [2024-11-06 09:09:13.862741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:14.870 [2024-11-06 09:09:13.862935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:14.870 [2024-11-06 09:09:13.862961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:14.870 [2024-11-06 09:09:13.862977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:14.870 [2024-11-06 09:09:13.862985] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:14.870 [2024-11-06 09:09:13.862999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.870 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.135 [2024-11-06 09:09:13.916160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:15.135 BaseBdev1
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.135 [
00:16:15.135 {
00:16:15.135 "name": "BaseBdev1",
00:16:15.135 "aliases": [
00:16:15.135 "f27ade32-7011-4c76-a781-f4be74dbe74d"
00:16:15.135 ],
00:16:15.135 "product_name": "Malloc disk",
00:16:15.135 "block_size": 512,
00:16:15.135 "num_blocks": 65536,
00:16:15.135 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d",
00:16:15.135 "assigned_rate_limits": {
00:16:15.135 "rw_ios_per_sec": 0,
00:16:15.135 "rw_mbytes_per_sec": 0,
00:16:15.135 "r_mbytes_per_sec": 0,
00:16:15.135 "w_mbytes_per_sec": 0
00:16:15.135 },
00:16:15.135 "claimed": true,
00:16:15.135 "claim_type": "exclusive_write",
00:16:15.135 "zoned": false,
00:16:15.135 "supported_io_types": {
00:16:15.135 "read": true,
00:16:15.135 "write": true,
00:16:15.135 "unmap": true,
00:16:15.135 "flush": true,
00:16:15.135 "reset": true,
00:16:15.135 "nvme_admin": false,
00:16:15.135 "nvme_io": false,
00:16:15.135 "nvme_io_md": false,
00:16:15.135 "write_zeroes": true,
00:16:15.135 "zcopy": true,
00:16:15.135 "get_zone_info": false,
00:16:15.135 "zone_management": false,
00:16:15.135 "zone_append": false,
00:16:15.135 "compare": false,
00:16:15.135 "compare_and_write": false,
00:16:15.135 "abort": true,
00:16:15.135 "seek_hole": false,
00:16:15.135 "seek_data": false,
00:16:15.135 "copy": true,
00:16:15.135 "nvme_iov_md": false
00:16:15.135 },
00:16:15.135 "memory_domains": [
00:16:15.135 {
00:16:15.135 "dma_device_id": "system",
00:16:15.135 "dma_device_type": 1
00:16:15.135 },
00:16:15.135 {
00:16:15.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:15.135 "dma_device_type": 2
00:16:15.135 }
00:16:15.135 ],
00:16:15.135 "driver_specific": {}
00:16:15.135 }
00:16:15.135 ]
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.135 09:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.135 09:09:14
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.135 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.135 "name": "Existed_Raid", 00:16:15.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.135 "strip_size_kb": 64, 00:16:15.135 "state": "configuring", 00:16:15.135 "raid_level": "concat", 00:16:15.135 "superblock": false, 00:16:15.135 "num_base_bdevs": 3, 00:16:15.135 "num_base_bdevs_discovered": 1, 00:16:15.135 "num_base_bdevs_operational": 3, 00:16:15.135 "base_bdevs_list": [ 00:16:15.135 { 00:16:15.135 "name": "BaseBdev1", 00:16:15.135 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d", 00:16:15.135 "is_configured": true, 00:16:15.135 "data_offset": 0, 00:16:15.135 "data_size": 65536 00:16:15.135 }, 00:16:15.135 { 00:16:15.135 "name": "BaseBdev2", 00:16:15.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.135 "is_configured": false, 00:16:15.135 "data_offset": 0, 00:16:15.135 "data_size": 0 00:16:15.135 }, 00:16:15.135 { 00:16:15.135 "name": "BaseBdev3", 00:16:15.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.135 "is_configured": false, 00:16:15.135 "data_offset": 0, 00:16:15.135 "data_size": 0 00:16:15.135 } 00:16:15.135 ] 00:16:15.135 }' 00:16:15.135 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.135 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.395 [2024-11-06 09:09:14.379612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.395 [2024-11-06 09:09:14.379827] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.395 [2024-11-06 09:09:14.391669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.395 [2024-11-06 09:09:14.393945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.395 [2024-11-06 09:09:14.394137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.395 [2024-11-06 09:09:14.394164] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.395 [2024-11-06 09:09:14.394179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.395 09:09:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.395 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.654 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.654 "name": "Existed_Raid", 00:16:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.654 "strip_size_kb": 64, 00:16:15.654 "state": "configuring", 00:16:15.654 "raid_level": "concat", 00:16:15.654 "superblock": false, 00:16:15.654 "num_base_bdevs": 3, 00:16:15.654 "num_base_bdevs_discovered": 1, 00:16:15.654 "num_base_bdevs_operational": 3, 00:16:15.654 "base_bdevs_list": [ 00:16:15.654 { 00:16:15.654 "name": "BaseBdev1", 00:16:15.654 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d", 00:16:15.654 "is_configured": true, 00:16:15.654 "data_offset": 
0, 00:16:15.654 "data_size": 65536 00:16:15.654 }, 00:16:15.654 { 00:16:15.654 "name": "BaseBdev2", 00:16:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.654 "is_configured": false, 00:16:15.654 "data_offset": 0, 00:16:15.654 "data_size": 0 00:16:15.654 }, 00:16:15.654 { 00:16:15.654 "name": "BaseBdev3", 00:16:15.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.654 "is_configured": false, 00:16:15.654 "data_offset": 0, 00:16:15.654 "data_size": 0 00:16:15.654 } 00:16:15.654 ] 00:16:15.654 }' 00:16:15.654 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.654 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.913 [2024-11-06 09:09:14.885874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.913 BaseBdev2 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.913 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.913 [ 00:16:15.913 { 00:16:15.913 "name": "BaseBdev2", 00:16:15.913 "aliases": [ 00:16:15.913 "1205c056-2988-4bd1-b0b8-e800c0bc41a9" 00:16:15.913 ], 00:16:15.913 "product_name": "Malloc disk", 00:16:15.913 "block_size": 512, 00:16:15.913 "num_blocks": 65536, 00:16:15.913 "uuid": "1205c056-2988-4bd1-b0b8-e800c0bc41a9", 00:16:15.913 "assigned_rate_limits": { 00:16:15.913 "rw_ios_per_sec": 0, 00:16:15.913 "rw_mbytes_per_sec": 0, 00:16:15.913 "r_mbytes_per_sec": 0, 00:16:15.913 "w_mbytes_per_sec": 0 00:16:15.913 }, 00:16:15.913 "claimed": true, 00:16:15.913 "claim_type": "exclusive_write", 00:16:15.913 "zoned": false, 00:16:15.913 "supported_io_types": { 00:16:15.913 "read": true, 00:16:15.913 "write": true, 00:16:15.913 "unmap": true, 00:16:15.913 "flush": true, 00:16:15.913 "reset": true, 00:16:15.913 "nvme_admin": false, 00:16:15.913 "nvme_io": false, 00:16:15.913 "nvme_io_md": false, 00:16:15.913 "write_zeroes": true, 00:16:15.913 "zcopy": true, 00:16:15.913 "get_zone_info": false, 00:16:15.913 "zone_management": false, 00:16:15.913 "zone_append": false, 00:16:15.913 "compare": false, 00:16:15.913 "compare_and_write": false, 00:16:15.913 "abort": true, 00:16:15.913 "seek_hole": 
false, 00:16:15.914 "seek_data": false, 00:16:15.914 "copy": true, 00:16:15.914 "nvme_iov_md": false 00:16:15.914 }, 00:16:15.914 "memory_domains": [ 00:16:15.914 { 00:16:15.914 "dma_device_id": "system", 00:16:15.914 "dma_device_type": 1 00:16:15.914 }, 00:16:15.914 { 00:16:15.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.914 "dma_device_type": 2 00:16:15.914 } 00:16:15.914 ], 00:16:15.914 "driver_specific": {} 00:16:15.914 } 00:16:15.914 ] 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.914 09:09:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.173 "name": "Existed_Raid", 00:16:16.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.173 "strip_size_kb": 64, 00:16:16.173 "state": "configuring", 00:16:16.173 "raid_level": "concat", 00:16:16.173 "superblock": false, 00:16:16.173 "num_base_bdevs": 3, 00:16:16.173 "num_base_bdevs_discovered": 2, 00:16:16.173 "num_base_bdevs_operational": 3, 00:16:16.173 "base_bdevs_list": [ 00:16:16.173 { 00:16:16.173 "name": "BaseBdev1", 00:16:16.173 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d", 00:16:16.173 "is_configured": true, 00:16:16.173 "data_offset": 0, 00:16:16.173 "data_size": 65536 00:16:16.173 }, 00:16:16.173 { 00:16:16.173 "name": "BaseBdev2", 00:16:16.173 "uuid": "1205c056-2988-4bd1-b0b8-e800c0bc41a9", 00:16:16.173 "is_configured": true, 00:16:16.173 "data_offset": 0, 00:16:16.173 "data_size": 65536 00:16:16.173 }, 00:16:16.173 { 00:16:16.173 "name": "BaseBdev3", 00:16:16.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.173 "is_configured": false, 00:16:16.173 "data_offset": 0, 00:16:16.173 "data_size": 0 00:16:16.173 } 00:16:16.173 ] 00:16:16.173 }' 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.173 09:09:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 [2024-11-06 09:09:15.460037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.432 [2024-11-06 09:09:15.460098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:16.432 [2024-11-06 09:09:15.460114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:16.432 [2024-11-06 09:09:15.460512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:16.432 [2024-11-06 09:09:15.460708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:16.432 [2024-11-06 09:09:15.460721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:16.432 [2024-11-06 09:09:15.461001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.432 BaseBdev3 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:16.432 09:09:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.432 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.690 [ 00:16:16.690 { 00:16:16.690 "name": "BaseBdev3", 00:16:16.690 "aliases": [ 00:16:16.690 "d41f499a-2351-4b21-b6e0-8fbf7105015d" 00:16:16.690 ], 00:16:16.690 "product_name": "Malloc disk", 00:16:16.690 "block_size": 512, 00:16:16.690 "num_blocks": 65536, 00:16:16.690 "uuid": "d41f499a-2351-4b21-b6e0-8fbf7105015d", 00:16:16.690 "assigned_rate_limits": { 00:16:16.690 "rw_ios_per_sec": 0, 00:16:16.690 "rw_mbytes_per_sec": 0, 00:16:16.690 "r_mbytes_per_sec": 0, 00:16:16.690 "w_mbytes_per_sec": 0 00:16:16.690 }, 00:16:16.690 "claimed": true, 00:16:16.690 "claim_type": "exclusive_write", 00:16:16.690 "zoned": false, 00:16:16.690 "supported_io_types": { 00:16:16.690 "read": true, 00:16:16.690 "write": true, 00:16:16.690 "unmap": true, 00:16:16.690 "flush": true, 00:16:16.690 "reset": true, 00:16:16.690 "nvme_admin": false, 00:16:16.690 "nvme_io": false, 00:16:16.690 "nvme_io_md": false, 00:16:16.690 "write_zeroes": true, 00:16:16.690 "zcopy": true, 00:16:16.690 "get_zone_info": false, 00:16:16.690 "zone_management": false, 00:16:16.690 "zone_append": false, 00:16:16.690 "compare": false, 
00:16:16.690 "compare_and_write": false, 00:16:16.690 "abort": true, 00:16:16.690 "seek_hole": false, 00:16:16.690 "seek_data": false, 00:16:16.690 "copy": true, 00:16:16.690 "nvme_iov_md": false 00:16:16.690 }, 00:16:16.690 "memory_domains": [ 00:16:16.690 { 00:16:16.690 "dma_device_id": "system", 00:16:16.690 "dma_device_type": 1 00:16:16.690 }, 00:16:16.690 { 00:16:16.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.690 "dma_device_type": 2 00:16:16.690 } 00:16:16.690 ], 00:16:16.690 "driver_specific": {} 00:16:16.690 } 00:16:16.690 ] 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.690 "name": "Existed_Raid", 00:16:16.690 "uuid": "cb121ead-280b-4507-bd77-20d897626928", 00:16:16.690 "strip_size_kb": 64, 00:16:16.690 "state": "online", 00:16:16.690 "raid_level": "concat", 00:16:16.690 "superblock": false, 00:16:16.690 "num_base_bdevs": 3, 00:16:16.690 "num_base_bdevs_discovered": 3, 00:16:16.690 "num_base_bdevs_operational": 3, 00:16:16.690 "base_bdevs_list": [ 00:16:16.690 { 00:16:16.690 "name": "BaseBdev1", 00:16:16.690 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d", 00:16:16.690 "is_configured": true, 00:16:16.690 "data_offset": 0, 00:16:16.690 "data_size": 65536 00:16:16.690 }, 00:16:16.690 { 00:16:16.690 "name": "BaseBdev2", 00:16:16.690 "uuid": "1205c056-2988-4bd1-b0b8-e800c0bc41a9", 00:16:16.690 "is_configured": true, 00:16:16.690 "data_offset": 0, 00:16:16.690 "data_size": 65536 00:16:16.690 }, 00:16:16.690 { 00:16:16.690 "name": "BaseBdev3", 00:16:16.690 "uuid": "d41f499a-2351-4b21-b6e0-8fbf7105015d", 00:16:16.690 "is_configured": true, 00:16:16.690 "data_offset": 0, 00:16:16.690 "data_size": 65536 00:16:16.690 } 00:16:16.690 ] 00:16:16.690 }' 00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:16.690 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.948 [2024-11-06 09:09:15.943765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.948 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.948 "name": "Existed_Raid", 00:16:16.948 "aliases": [ 00:16:16.948 "cb121ead-280b-4507-bd77-20d897626928" 00:16:16.948 ], 00:16:16.948 "product_name": "Raid Volume", 00:16:16.948 "block_size": 512, 00:16:16.948 "num_blocks": 196608, 00:16:16.948 "uuid": "cb121ead-280b-4507-bd77-20d897626928", 00:16:16.948 "assigned_rate_limits": { 00:16:16.948 "rw_ios_per_sec": 0, 00:16:16.948 "rw_mbytes_per_sec": 0, 00:16:16.948 "r_mbytes_per_sec": 
0, 00:16:16.948 "w_mbytes_per_sec": 0 00:16:16.948 }, 00:16:16.948 "claimed": false, 00:16:16.948 "zoned": false, 00:16:16.948 "supported_io_types": { 00:16:16.948 "read": true, 00:16:16.948 "write": true, 00:16:16.948 "unmap": true, 00:16:16.948 "flush": true, 00:16:16.948 "reset": true, 00:16:16.948 "nvme_admin": false, 00:16:16.948 "nvme_io": false, 00:16:16.949 "nvme_io_md": false, 00:16:16.949 "write_zeroes": true, 00:16:16.949 "zcopy": false, 00:16:16.949 "get_zone_info": false, 00:16:16.949 "zone_management": false, 00:16:16.949 "zone_append": false, 00:16:16.949 "compare": false, 00:16:16.949 "compare_and_write": false, 00:16:16.949 "abort": false, 00:16:16.949 "seek_hole": false, 00:16:16.949 "seek_data": false, 00:16:16.949 "copy": false, 00:16:16.949 "nvme_iov_md": false 00:16:16.949 }, 00:16:16.949 "memory_domains": [ 00:16:16.949 { 00:16:16.949 "dma_device_id": "system", 00:16:16.949 "dma_device_type": 1 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.949 "dma_device_type": 2 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "dma_device_id": "system", 00:16:16.949 "dma_device_type": 1 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.949 "dma_device_type": 2 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "dma_device_id": "system", 00:16:16.949 "dma_device_type": 1 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.949 "dma_device_type": 2 00:16:16.949 } 00:16:16.949 ], 00:16:16.949 "driver_specific": { 00:16:16.949 "raid": { 00:16:16.949 "uuid": "cb121ead-280b-4507-bd77-20d897626928", 00:16:16.949 "strip_size_kb": 64, 00:16:16.949 "state": "online", 00:16:16.949 "raid_level": "concat", 00:16:16.949 "superblock": false, 00:16:16.949 "num_base_bdevs": 3, 00:16:16.949 "num_base_bdevs_discovered": 3, 00:16:16.949 "num_base_bdevs_operational": 3, 00:16:16.949 "base_bdevs_list": [ 00:16:16.949 { 00:16:16.949 "name": "BaseBdev1", 
00:16:16.949 "uuid": "f27ade32-7011-4c76-a781-f4be74dbe74d", 00:16:16.949 "is_configured": true, 00:16:16.949 "data_offset": 0, 00:16:16.949 "data_size": 65536 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "name": "BaseBdev2", 00:16:16.949 "uuid": "1205c056-2988-4bd1-b0b8-e800c0bc41a9", 00:16:16.949 "is_configured": true, 00:16:16.949 "data_offset": 0, 00:16:16.949 "data_size": 65536 00:16:16.949 }, 00:16:16.949 { 00:16:16.949 "name": "BaseBdev3", 00:16:16.949 "uuid": "d41f499a-2351-4b21-b6e0-8fbf7105015d", 00:16:16.949 "is_configured": true, 00:16:16.949 "data_offset": 0, 00:16:16.949 "data_size": 65536 00:16:16.949 } 00:16:16.949 ] 00:16:16.949 } 00:16:16.949 } 00:16:16.949 }' 00:16:16.949 09:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:17.208 BaseBdev2 00:16:17.208 BaseBdev3' 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:17.208 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.467 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.467 [2024-11-06 09:09:16.251156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:17.467 [2024-11-06 09:09:16.251191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:17.467 [2024-11-06 09:09:16.251252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:17.467 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.467 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:17.467 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.468 "name": "Existed_Raid",
00:16:17.468 "uuid": "cb121ead-280b-4507-bd77-20d897626928",
00:16:17.468 "strip_size_kb": 64,
00:16:17.468 "state": "offline",
00:16:17.468 "raid_level": "concat",
00:16:17.468 "superblock": false,
00:16:17.468 "num_base_bdevs": 3,
00:16:17.468 "num_base_bdevs_discovered": 2,
00:16:17.468 "num_base_bdevs_operational": 2,
00:16:17.468 "base_bdevs_list": [
00:16:17.468 {
00:16:17.468 "name": null,
00:16:17.468 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.468 "is_configured": false,
00:16:17.468 "data_offset": 0,
00:16:17.468 "data_size": 65536
00:16:17.468 },
00:16:17.468 {
00:16:17.468 "name": "BaseBdev2",
00:16:17.468 "uuid": "1205c056-2988-4bd1-b0b8-e800c0bc41a9",
00:16:17.468 "is_configured": true,
00:16:17.468 "data_offset": 0,
00:16:17.468 "data_size": 65536
00:16:17.468 },
00:16:17.468 {
00:16:17.468 "name": "BaseBdev3",
00:16:17.468 "uuid": "d41f499a-2351-4b21-b6e0-8fbf7105015d",
00:16:17.468 "is_configured": true,
00:16:17.468 "data_offset": 0,
00:16:17.468 "data_size": 65536
00:16:17.468 }
00:16:17.468 ]
00:16:17.468 }'
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.468 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.036 [2024-11-06 09:09:16.847955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.036 09:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.036 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:18.036 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:18.036 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:18.036 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.036 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.036 [2024-11-06 09:09:17.009612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:18.036 [2024-11-06 09:09:17.009697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.295 BaseBdev2
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.295 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.295 [
00:16:18.295 {
00:16:18.295 "name": "BaseBdev2",
00:16:18.295 "aliases": [
00:16:18.295 "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c"
00:16:18.295 ],
00:16:18.295 "product_name": "Malloc disk",
00:16:18.295 "block_size": 512,
00:16:18.295 "num_blocks": 65536,
00:16:18.295 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c",
00:16:18.295 "assigned_rate_limits": {
00:16:18.295 "rw_ios_per_sec": 0,
00:16:18.295 "rw_mbytes_per_sec": 0,
00:16:18.295 "r_mbytes_per_sec": 0,
00:16:18.295 "w_mbytes_per_sec": 0
00:16:18.295 },
00:16:18.295 "claimed": false,
00:16:18.295 "zoned": false,
00:16:18.295 "supported_io_types": {
00:16:18.295 "read": true,
00:16:18.295 "write": true,
00:16:18.295 "unmap": true,
00:16:18.295 "flush": true,
00:16:18.295 "reset": true,
00:16:18.295 "nvme_admin": false,
00:16:18.295 "nvme_io": false,
00:16:18.295 "nvme_io_md": false,
00:16:18.295 "write_zeroes": true,
00:16:18.295 "zcopy": true,
00:16:18.295 "get_zone_info": false,
00:16:18.295 "zone_management": false,
00:16:18.295 "zone_append": false,
00:16:18.295 "compare": false,
00:16:18.295 "compare_and_write": false,
00:16:18.295 "abort": true,
00:16:18.295 "seek_hole": false,
00:16:18.295 "seek_data": false,
00:16:18.295 "copy": true,
00:16:18.295 "nvme_iov_md": false
00:16:18.295 },
00:16:18.295 "memory_domains": [
00:16:18.295 {
00:16:18.295 "dma_device_id": "system",
00:16:18.295 "dma_device_type": 1
00:16:18.295 },
00:16:18.295 {
00:16:18.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:18.295 "dma_device_type": 2
00:16:18.295 }
00:16:18.295 ],
00:16:18.295 "driver_specific": {}
00:16:18.295 }
00:16:18.295 ]
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.296 BaseBdev3
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.296 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.556 [
00:16:18.556 {
00:16:18.556 "name": "BaseBdev3",
00:16:18.556 "aliases": [
00:16:18.556 "5faff589-eb43-4bcc-8def-178609d372d4"
00:16:18.556 ],
00:16:18.556 "product_name": "Malloc disk",
00:16:18.556 "block_size": 512,
00:16:18.556 "num_blocks": 65536,
00:16:18.556 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4",
00:16:18.556 "assigned_rate_limits": {
00:16:18.556 "rw_ios_per_sec": 0,
00:16:18.556 "rw_mbytes_per_sec": 0,
00:16:18.556 "r_mbytes_per_sec": 0,
00:16:18.556 "w_mbytes_per_sec": 0
00:16:18.556 },
00:16:18.556 "claimed": false,
00:16:18.556 "zoned": false,
00:16:18.556 "supported_io_types": {
00:16:18.556 "read": true,
00:16:18.556 "write": true,
00:16:18.556 "unmap": true,
00:16:18.556 "flush": true,
00:16:18.556 "reset": true,
00:16:18.556 "nvme_admin": false,
00:16:18.556 "nvme_io": false,
00:16:18.556 "nvme_io_md": false,
00:16:18.556 "write_zeroes": true,
00:16:18.556 "zcopy": true,
00:16:18.556 "get_zone_info": false,
00:16:18.556 "zone_management": false,
00:16:18.556 "zone_append": false,
00:16:18.556 "compare": false,
00:16:18.556 "compare_and_write": false,
00:16:18.556 "abort": true,
00:16:18.556 "seek_hole": false,
00:16:18.556 "seek_data": false,
00:16:18.556 "copy": true,
00:16:18.556 "nvme_iov_md": false
00:16:18.556 },
00:16:18.556 "memory_domains": [
00:16:18.556 {
00:16:18.556 "dma_device_id": "system",
00:16:18.556 "dma_device_type": 1
00:16:18.556 },
00:16:18.556 {
00:16:18.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:18.556 "dma_device_type": 2
00:16:18.556 }
00:16:18.556 ],
00:16:18.556 "driver_specific": {}
00:16:18.556 }
00:16:18.556 ]
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.556 [2024-11-06 09:09:17.366855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:18.556 [2024-11-06 09:09:17.367142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:18.556 [2024-11-06 09:09:17.367261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:18.556 [2024-11-06 09:09:17.369739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.556 "name": "Existed_Raid",
00:16:18.556 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.556 "strip_size_kb": 64,
00:16:18.556 "state": "configuring",
00:16:18.556 "raid_level": "concat",
00:16:18.556 "superblock": false,
00:16:18.556 "num_base_bdevs": 3,
00:16:18.556 "num_base_bdevs_discovered": 2,
00:16:18.556 "num_base_bdevs_operational": 3,
00:16:18.556 "base_bdevs_list": [
00:16:18.556 {
00:16:18.556 "name": "BaseBdev1",
00:16:18.556 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.556 "is_configured": false,
00:16:18.556 "data_offset": 0,
00:16:18.556 "data_size": 0
00:16:18.556 },
00:16:18.556 {
00:16:18.556 "name": "BaseBdev2",
00:16:18.556 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c",
00:16:18.556 "is_configured": true,
00:16:18.556 "data_offset": 0,
00:16:18.556 "data_size": 65536
00:16:18.556 },
00:16:18.556 {
00:16:18.556 "name": "BaseBdev3",
00:16:18.556 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4",
00:16:18.556 "is_configured": true,
00:16:18.556 "data_offset": 0,
00:16:18.556 "data_size": 65536
00:16:18.556 }
00:16:18.556 ]
00:16:18.556 }'
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.556 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.815 [2024-11-06 09:09:17.826236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.815 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.074 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.074 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.074 "name": "Existed_Raid",
00:16:19.074 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.074 "strip_size_kb": 64,
00:16:19.074 "state": "configuring",
00:16:19.074 "raid_level": "concat",
00:16:19.074 "superblock": false,
00:16:19.074 "num_base_bdevs": 3,
00:16:19.074 "num_base_bdevs_discovered": 1,
00:16:19.074 "num_base_bdevs_operational": 3,
00:16:19.074 "base_bdevs_list": [
00:16:19.074 {
00:16:19.074 "name": "BaseBdev1",
00:16:19.074 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.074 "is_configured": false,
00:16:19.074 "data_offset": 0,
00:16:19.074 "data_size": 0
00:16:19.074 },
00:16:19.074 {
00:16:19.074 "name": null,
00:16:19.074 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c",
00:16:19.074 "is_configured": false,
00:16:19.074 "data_offset": 0,
00:16:19.074 "data_size": 65536
00:16:19.074 },
00:16:19.074 {
00:16:19.074 "name": "BaseBdev3",
00:16:19.074 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4",
00:16:19.074 "is_configured": true,
00:16:19.074 "data_offset": 0,
00:16:19.074 "data_size": 65536
00:16:19.074 }
00:16:19.074 ]
00:16:19.074 }'
00:16:19.074 09:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.074 09:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.333 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.591 [2024-11-06 09:09:18.374882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:19.591 BaseBdev1
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.591 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.591 [
00:16:19.591 {
00:16:19.591 "name": "BaseBdev1",
00:16:19.591 "aliases": [
00:16:19.591 "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9"
00:16:19.592 ],
00:16:19.592 "product_name": "Malloc disk",
00:16:19.592 "block_size": 512,
00:16:19.592 "num_blocks": 65536,
00:16:19.592 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9",
00:16:19.592 "assigned_rate_limits": {
00:16:19.592 "rw_ios_per_sec": 0,
00:16:19.592 "rw_mbytes_per_sec": 0,
00:16:19.592 "r_mbytes_per_sec": 0,
00:16:19.592 "w_mbytes_per_sec": 0
00:16:19.592 },
00:16:19.592 "claimed": true,
00:16:19.592 "claim_type": "exclusive_write",
00:16:19.592 "zoned": false,
00:16:19.592 "supported_io_types": {
00:16:19.592 "read": true,
00:16:19.592 "write": true,
00:16:19.592 "unmap": true,
00:16:19.592 "flush": true,
00:16:19.592 "reset": true,
00:16:19.592 "nvme_admin": false,
00:16:19.592 "nvme_io": false,
00:16:19.592 "nvme_io_md": false,
00:16:19.592 "write_zeroes": true,
00:16:19.592 "zcopy": true,
00:16:19.592 "get_zone_info": false,
00:16:19.592 "zone_management": false,
00:16:19.592 "zone_append": false,
00:16:19.592 "compare": false,
00:16:19.592 "compare_and_write": false,
00:16:19.592 "abort": true,
00:16:19.592 "seek_hole": false,
00:16:19.592 "seek_data": false,
00:16:19.592 "copy": true,
00:16:19.592 "nvme_iov_md": false
00:16:19.592 },
00:16:19.592 "memory_domains": [
00:16:19.592 {
00:16:19.592 "dma_device_id": "system",
00:16:19.592 "dma_device_type": 1
00:16:19.592 },
00:16:19.592 {
00:16:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:19.592 "dma_device_type": 2
00:16:19.592 }
00:16:19.592 ],
00:16:19.592 "driver_specific": {}
00:16:19.592 }
00:16:19.592 ]
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.592 "name": "Existed_Raid",
00:16:19.592 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.592 "strip_size_kb": 64,
00:16:19.592 "state": "configuring",
00:16:19.592 "raid_level": "concat",
00:16:19.592 "superblock": false,
00:16:19.592 "num_base_bdevs": 3,
00:16:19.592 "num_base_bdevs_discovered": 2,
00:16:19.592 "num_base_bdevs_operational": 3,
00:16:19.592 "base_bdevs_list": [
00:16:19.592 {
00:16:19.592 "name": "BaseBdev1",
00:16:19.592 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9",
00:16:19.592 "is_configured": true,
00:16:19.592 "data_offset": 0,
00:16:19.592 "data_size": 65536
00:16:19.592 },
00:16:19.592 {
00:16:19.592 "name": null,
00:16:19.592 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c",
00:16:19.592 "is_configured": false,
00:16:19.592 "data_offset": 0,
00:16:19.592 "data_size": 65536
00:16:19.592 },
00:16:19.592 {
00:16:19.592 "name": "BaseBdev3",
00:16:19.592 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4",
00:16:19.592 "is_configured": true,
00:16:19.592 "data_offset": 0,
00:16:19.592 "data_size": 65536
00:16:19.592 }
00:16:19.592 ]
00:16:19.592 }'
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.592 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.850 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.850 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.850 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.850 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:19.850 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.111 [2024-11-06 09:09:18.906261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.111 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:20.111 "name": "Existed_Raid",
00:16:20.111 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.111 "strip_size_kb": 64,
00:16:20.111 "state": "configuring",
00:16:20.111 "raid_level": "concat",
00:16:20.111 "superblock": false,
00:16:20.111 "num_base_bdevs": 3,
00:16:20.111 "num_base_bdevs_discovered": 1,
00:16:20.111 "num_base_bdevs_operational": 3,
00:16:20.111 "base_bdevs_list": [
00:16:20.112 {
00:16:20.112 "name": "BaseBdev1",
00:16:20.112 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9",
00:16:20.112 "is_configured": true,
00:16:20.112 "data_offset": 0,
00:16:20.112 "data_size": 65536
00:16:20.112 },
00:16:20.112 {
00:16:20.112 "name": null,
00:16:20.112 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c",
00:16:20.112 "is_configured": false,
00:16:20.112 "data_offset": 0,
00:16:20.112 "data_size": 65536
00:16:20.112 },
00:16:20.112 {
00:16:20.112 "name": null,
00:16:20.112 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4",
00:16:20.112 "is_configured": false,
00:16:20.112 "data_offset": 0,
00:16:20.112 "data_size": 65536
00:16:20.112 }
00:16:20.112 ]
00:16:20.112 }'
00:16:20.112 09:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:20.112 09:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.370 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:20.370 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.370 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.370 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.629 [2024-11-06 09:09:19.429778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.629 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.629 "name": "Existed_Raid", 00:16:20.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.629 "strip_size_kb": 64, 00:16:20.629 "state": "configuring", 00:16:20.629 "raid_level": "concat", 00:16:20.629 "superblock": false, 00:16:20.629 "num_base_bdevs": 3, 00:16:20.629 "num_base_bdevs_discovered": 2, 00:16:20.629 "num_base_bdevs_operational": 3, 00:16:20.629 "base_bdevs_list": [ 00:16:20.629 { 00:16:20.629 "name": "BaseBdev1", 00:16:20.629 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:20.629 "is_configured": true, 00:16:20.629 "data_offset": 0, 00:16:20.629 "data_size": 65536 00:16:20.629 }, 00:16:20.629 { 00:16:20.629 "name": null, 00:16:20.629 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c", 00:16:20.629 "is_configured": false, 00:16:20.629 "data_offset": 0, 00:16:20.629 "data_size": 65536 00:16:20.629 }, 00:16:20.629 { 00:16:20.630 "name": "BaseBdev3", 00:16:20.630 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4", 00:16:20.630 "is_configured": true, 00:16:20.630 "data_offset": 0, 00:16:20.630 "data_size": 65536 00:16:20.630 } 00:16:20.630 ] 00:16:20.630 }' 00:16:20.630 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.630 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.887 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.887 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.887 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:20.887 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:20.887 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.146 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:21.146 09:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:21.146 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.146 09:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.146 [2024-11-06 09:09:19.929773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.146 09:09:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.146 "name": "Existed_Raid", 00:16:21.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.146 "strip_size_kb": 64, 00:16:21.146 "state": "configuring", 00:16:21.146 "raid_level": "concat", 00:16:21.146 "superblock": false, 00:16:21.146 "num_base_bdevs": 3, 00:16:21.146 "num_base_bdevs_discovered": 1, 00:16:21.146 "num_base_bdevs_operational": 3, 00:16:21.146 "base_bdevs_list": [ 00:16:21.146 { 00:16:21.146 "name": null, 00:16:21.146 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:21.146 "is_configured": false, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 }, 00:16:21.146 { 00:16:21.146 "name": null, 00:16:21.146 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c", 00:16:21.146 "is_configured": false, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 }, 00:16:21.146 { 00:16:21.146 "name": "BaseBdev3", 00:16:21.146 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4", 00:16:21.146 "is_configured": true, 00:16:21.146 "data_offset": 0, 00:16:21.146 "data_size": 65536 00:16:21.146 } 00:16:21.146 ] 00:16:21.146 }' 00:16:21.146 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.146 09:09:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.713 [2024-11-06 09:09:20.583252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.713 09:09:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.713 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.713 "name": "Existed_Raid", 00:16:21.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.713 "strip_size_kb": 64, 00:16:21.713 "state": "configuring", 00:16:21.713 "raid_level": "concat", 00:16:21.713 "superblock": false, 00:16:21.713 "num_base_bdevs": 3, 00:16:21.713 "num_base_bdevs_discovered": 2, 00:16:21.713 "num_base_bdevs_operational": 3, 00:16:21.713 "base_bdevs_list": [ 00:16:21.713 { 00:16:21.713 "name": null, 00:16:21.713 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:21.713 "is_configured": false, 00:16:21.713 "data_offset": 0, 00:16:21.713 "data_size": 65536 00:16:21.713 }, 00:16:21.713 { 00:16:21.713 "name": "BaseBdev2", 00:16:21.713 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c", 00:16:21.713 "is_configured": true, 00:16:21.713 "data_offset": 
0, 00:16:21.713 "data_size": 65536 00:16:21.713 }, 00:16:21.713 { 00:16:21.714 "name": "BaseBdev3", 00:16:21.714 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4", 00:16:21.714 "is_configured": true, 00:16:21.714 "data_offset": 0, 00:16:21.714 "data_size": 65536 00:16:21.714 } 00:16:21.714 ] 00:16:21.714 }' 00:16:21.714 09:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.714 09:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 [2024-11-06 09:09:21.178704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:22.282 [2024-11-06 09:09:21.178962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:22.282 [2024-11-06 09:09:21.178997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:22.282 [2024-11-06 09:09:21.179337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:22.282 [2024-11-06 09:09:21.179496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:22.282 [2024-11-06 09:09:21.179508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:22.282 [2024-11-06 09:09:21.179794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.282 NewBaseBdev 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:22.282 
09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.282 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.282 [ 00:16:22.282 { 00:16:22.282 "name": "NewBaseBdev", 00:16:22.282 "aliases": [ 00:16:22.282 "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9" 00:16:22.282 ], 00:16:22.282 "product_name": "Malloc disk", 00:16:22.282 "block_size": 512, 00:16:22.282 "num_blocks": 65536, 00:16:22.282 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:22.282 "assigned_rate_limits": { 00:16:22.282 "rw_ios_per_sec": 0, 00:16:22.282 "rw_mbytes_per_sec": 0, 00:16:22.282 "r_mbytes_per_sec": 0, 00:16:22.282 "w_mbytes_per_sec": 0 00:16:22.282 }, 00:16:22.282 "claimed": true, 00:16:22.282 "claim_type": "exclusive_write", 00:16:22.282 "zoned": false, 00:16:22.282 "supported_io_types": { 00:16:22.283 "read": true, 00:16:22.283 "write": true, 00:16:22.283 "unmap": true, 00:16:22.283 "flush": true, 00:16:22.283 "reset": true, 00:16:22.283 "nvme_admin": false, 00:16:22.283 "nvme_io": false, 00:16:22.283 "nvme_io_md": false, 00:16:22.283 "write_zeroes": true, 00:16:22.283 "zcopy": true, 00:16:22.283 "get_zone_info": false, 00:16:22.283 "zone_management": false, 00:16:22.283 "zone_append": false, 00:16:22.283 "compare": false, 00:16:22.283 "compare_and_write": false, 00:16:22.283 "abort": true, 00:16:22.283 "seek_hole": false, 00:16:22.283 "seek_data": false, 00:16:22.283 "copy": true, 00:16:22.283 "nvme_iov_md": false 00:16:22.283 }, 00:16:22.283 
"memory_domains": [ 00:16:22.283 { 00:16:22.283 "dma_device_id": "system", 00:16:22.283 "dma_device_type": 1 00:16:22.283 }, 00:16:22.283 { 00:16:22.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.283 "dma_device_type": 2 00:16:22.283 } 00:16:22.283 ], 00:16:22.283 "driver_specific": {} 00:16:22.283 } 00:16:22.283 ] 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.283 "name": "Existed_Raid", 00:16:22.283 "uuid": "75c871c2-00a2-49c6-8ae6-6e7ee48abd98", 00:16:22.283 "strip_size_kb": 64, 00:16:22.283 "state": "online", 00:16:22.283 "raid_level": "concat", 00:16:22.283 "superblock": false, 00:16:22.283 "num_base_bdevs": 3, 00:16:22.283 "num_base_bdevs_discovered": 3, 00:16:22.283 "num_base_bdevs_operational": 3, 00:16:22.283 "base_bdevs_list": [ 00:16:22.283 { 00:16:22.283 "name": "NewBaseBdev", 00:16:22.283 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:22.283 "is_configured": true, 00:16:22.283 "data_offset": 0, 00:16:22.283 "data_size": 65536 00:16:22.283 }, 00:16:22.283 { 00:16:22.283 "name": "BaseBdev2", 00:16:22.283 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c", 00:16:22.283 "is_configured": true, 00:16:22.283 "data_offset": 0, 00:16:22.283 "data_size": 65536 00:16:22.283 }, 00:16:22.283 { 00:16:22.283 "name": "BaseBdev3", 00:16:22.283 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4", 00:16:22.283 "is_configured": true, 00:16:22.283 "data_offset": 0, 00:16:22.283 "data_size": 65536 00:16:22.283 } 00:16:22.283 ] 00:16:22.283 }' 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.283 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.851 [2024-11-06 09:09:21.666404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.851 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.851 "name": "Existed_Raid", 00:16:22.851 "aliases": [ 00:16:22.851 "75c871c2-00a2-49c6-8ae6-6e7ee48abd98" 00:16:22.851 ], 00:16:22.851 "product_name": "Raid Volume", 00:16:22.851 "block_size": 512, 00:16:22.851 "num_blocks": 196608, 00:16:22.851 "uuid": "75c871c2-00a2-49c6-8ae6-6e7ee48abd98", 00:16:22.851 "assigned_rate_limits": { 00:16:22.851 "rw_ios_per_sec": 0, 00:16:22.851 "rw_mbytes_per_sec": 0, 00:16:22.851 "r_mbytes_per_sec": 0, 00:16:22.851 "w_mbytes_per_sec": 0 00:16:22.851 }, 00:16:22.851 "claimed": false, 00:16:22.851 "zoned": false, 00:16:22.851 "supported_io_types": { 00:16:22.851 "read": true, 00:16:22.851 "write": true, 00:16:22.851 "unmap": true, 00:16:22.851 "flush": true, 00:16:22.851 "reset": true, 00:16:22.851 "nvme_admin": false, 00:16:22.851 "nvme_io": false, 00:16:22.851 "nvme_io_md": false, 00:16:22.851 
"write_zeroes": true, 00:16:22.851 "zcopy": false, 00:16:22.851 "get_zone_info": false, 00:16:22.851 "zone_management": false, 00:16:22.851 "zone_append": false, 00:16:22.851 "compare": false, 00:16:22.851 "compare_and_write": false, 00:16:22.851 "abort": false, 00:16:22.851 "seek_hole": false, 00:16:22.851 "seek_data": false, 00:16:22.851 "copy": false, 00:16:22.851 "nvme_iov_md": false 00:16:22.851 }, 00:16:22.851 "memory_domains": [ 00:16:22.851 { 00:16:22.851 "dma_device_id": "system", 00:16:22.851 "dma_device_type": 1 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.851 "dma_device_type": 2 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "dma_device_id": "system", 00:16:22.851 "dma_device_type": 1 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.851 "dma_device_type": 2 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "dma_device_id": "system", 00:16:22.851 "dma_device_type": 1 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.851 "dma_device_type": 2 00:16:22.851 } 00:16:22.851 ], 00:16:22.851 "driver_specific": { 00:16:22.851 "raid": { 00:16:22.851 "uuid": "75c871c2-00a2-49c6-8ae6-6e7ee48abd98", 00:16:22.851 "strip_size_kb": 64, 00:16:22.851 "state": "online", 00:16:22.851 "raid_level": "concat", 00:16:22.851 "superblock": false, 00:16:22.851 "num_base_bdevs": 3, 00:16:22.851 "num_base_bdevs_discovered": 3, 00:16:22.851 "num_base_bdevs_operational": 3, 00:16:22.851 "base_bdevs_list": [ 00:16:22.851 { 00:16:22.851 "name": "NewBaseBdev", 00:16:22.851 "uuid": "7dbd6ffb-8e79-4c75-b69a-6f1d145d04c9", 00:16:22.851 "is_configured": true, 00:16:22.851 "data_offset": 0, 00:16:22.851 "data_size": 65536 00:16:22.851 }, 00:16:22.851 { 00:16:22.851 "name": "BaseBdev2", 00:16:22.851 "uuid": "61f4d2e5-78b2-4d1f-9b9f-0eddbdcbae1c", 00:16:22.851 "is_configured": true, 00:16:22.852 "data_offset": 0, 00:16:22.852 "data_size": 65536 00:16:22.852 }, 
00:16:22.852 { 00:16:22.852 "name": "BaseBdev3", 00:16:22.852 "uuid": "5faff589-eb43-4bcc-8def-178609d372d4", 00:16:22.852 "is_configured": true, 00:16:22.852 "data_offset": 0, 00:16:22.852 "data_size": 65536 00:16:22.852 } 00:16:22.852 ] 00:16:22.852 } 00:16:22.852 } 00:16:22.852 }' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:22.852 BaseBdev2 00:16:22.852 BaseBdev3' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.852 09:09:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.852 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.111 
09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.111 [2024-11-06 09:09:21.933716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.111 [2024-11-06 09:09:21.933764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.111 [2024-11-06 09:09:21.933853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.111 [2024-11-06 09:09:21.933911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.111 [2024-11-06 09:09:21.933925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65389 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65389 ']' 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65389 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65389 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.111 killing process with pid 65389 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65389' 00:16:23.111 09:09:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65389 00:16:23.111 [2024-11-06 09:09:21.980649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.111 09:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65389 00:16:23.378 [2024-11-06 09:09:22.286932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:24.782 00:16:24.782 real 0m11.069s 00:16:24.782 user 0m17.542s 00:16:24.782 sys 0m2.244s 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.782 ************************************ 00:16:24.782 END TEST raid_state_function_test 00:16:24.782 ************************************ 00:16:24.782 09:09:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:24.782 09:09:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:24.782 09:09:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.782 09:09:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.782 ************************************ 00:16:24.782 START TEST raid_state_function_test_sb 00:16:24.782 ************************************ 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:24.782 09:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:24.782 09:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:24.782 Process raid pid: 66016 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66016 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66016' 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66016 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66016 ']' 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:24.782 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.782 [2024-11-06 09:09:23.621607] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:16:24.782 [2024-11-06 09:09:23.622034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.782 [2024-11-06 09:09:23.812339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.046 [2024-11-06 09:09:23.935734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.305 [2024-11-06 09:09:24.150960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.305 [2024-11-06 09:09:24.151197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.564 [2024-11-06 09:09:24.477823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.564 [2024-11-06 09:09:24.478119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.564 [2024-11-06 
09:09:24.478146] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.564 [2024-11-06 09:09:24.478162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.564 [2024-11-06 09:09:24.478170] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.564 [2024-11-06 09:09:24.478183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.564 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.564 "name": "Existed_Raid", 00:16:25.564 "uuid": "51bdd0b5-310c-43e5-8c29-6c5d3369ff68", 00:16:25.564 "strip_size_kb": 64, 00:16:25.564 "state": "configuring", 00:16:25.564 "raid_level": "concat", 00:16:25.564 "superblock": true, 00:16:25.564 "num_base_bdevs": 3, 00:16:25.564 "num_base_bdevs_discovered": 0, 00:16:25.564 "num_base_bdevs_operational": 3, 00:16:25.564 "base_bdevs_list": [ 00:16:25.564 { 00:16:25.564 "name": "BaseBdev1", 00:16:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.564 "is_configured": false, 00:16:25.564 "data_offset": 0, 00:16:25.564 "data_size": 0 00:16:25.564 }, 00:16:25.564 { 00:16:25.564 "name": "BaseBdev2", 00:16:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.564 "is_configured": false, 00:16:25.564 "data_offset": 0, 00:16:25.564 "data_size": 0 00:16:25.564 }, 00:16:25.564 { 00:16:25.564 "name": "BaseBdev3", 00:16:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.564 "is_configured": false, 00:16:25.564 "data_offset": 0, 00:16:25.564 "data_size": 0 00:16:25.564 } 00:16:25.565 ] 00:16:25.565 }' 00:16:25.565 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.565 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 [2024-11-06 09:09:24.929766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.133 [2024-11-06 09:09:24.929837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 [2024-11-06 09:09:24.941774] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.133 [2024-11-06 09:09:24.941835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.133 [2024-11-06 09:09:24.941847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.133 [2024-11-06 09:09:24.941861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.133 [2024-11-06 09:09:24.941870] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.133 [2024-11-06 09:09:24.941883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.133 
09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 [2024-11-06 09:09:24.991762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.133 BaseBdev1 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 [ 00:16:26.133 { 
00:16:26.133 "name": "BaseBdev1", 00:16:26.133 "aliases": [ 00:16:26.133 "28ab4160-d7bd-4ba0-af8e-7066eae37d97" 00:16:26.133 ], 00:16:26.133 "product_name": "Malloc disk", 00:16:26.133 "block_size": 512, 00:16:26.133 "num_blocks": 65536, 00:16:26.133 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:26.133 "assigned_rate_limits": { 00:16:26.133 "rw_ios_per_sec": 0, 00:16:26.133 "rw_mbytes_per_sec": 0, 00:16:26.133 "r_mbytes_per_sec": 0, 00:16:26.133 "w_mbytes_per_sec": 0 00:16:26.133 }, 00:16:26.133 "claimed": true, 00:16:26.133 "claim_type": "exclusive_write", 00:16:26.133 "zoned": false, 00:16:26.133 "supported_io_types": { 00:16:26.133 "read": true, 00:16:26.133 "write": true, 00:16:26.133 "unmap": true, 00:16:26.133 "flush": true, 00:16:26.133 "reset": true, 00:16:26.133 "nvme_admin": false, 00:16:26.133 "nvme_io": false, 00:16:26.133 "nvme_io_md": false, 00:16:26.133 "write_zeroes": true, 00:16:26.133 "zcopy": true, 00:16:26.133 "get_zone_info": false, 00:16:26.133 "zone_management": false, 00:16:26.133 "zone_append": false, 00:16:26.133 "compare": false, 00:16:26.133 "compare_and_write": false, 00:16:26.133 "abort": true, 00:16:26.133 "seek_hole": false, 00:16:26.133 "seek_data": false, 00:16:26.133 "copy": true, 00:16:26.133 "nvme_iov_md": false 00:16:26.133 }, 00:16:26.133 "memory_domains": [ 00:16:26.133 { 00:16:26.133 "dma_device_id": "system", 00:16:26.133 "dma_device_type": 1 00:16:26.133 }, 00:16:26.133 { 00:16:26.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.133 "dma_device_type": 2 00:16:26.133 } 00:16:26.133 ], 00:16:26.133 "driver_specific": {} 00:16:26.133 } 00:16:26.133 ] 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.133 "name": "Existed_Raid", 00:16:26.133 "uuid": "3fefcd2a-7da2-4caa-a6bd-6ea67cc572f9", 00:16:26.133 "strip_size_kb": 64, 00:16:26.133 "state": "configuring", 00:16:26.133 "raid_level": "concat", 00:16:26.133 "superblock": true, 00:16:26.133 
"num_base_bdevs": 3, 00:16:26.133 "num_base_bdevs_discovered": 1, 00:16:26.133 "num_base_bdevs_operational": 3, 00:16:26.133 "base_bdevs_list": [ 00:16:26.133 { 00:16:26.133 "name": "BaseBdev1", 00:16:26.133 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:26.133 "is_configured": true, 00:16:26.133 "data_offset": 2048, 00:16:26.133 "data_size": 63488 00:16:26.133 }, 00:16:26.133 { 00:16:26.133 "name": "BaseBdev2", 00:16:26.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.133 "is_configured": false, 00:16:26.133 "data_offset": 0, 00:16:26.133 "data_size": 0 00:16:26.133 }, 00:16:26.133 { 00:16:26.133 "name": "BaseBdev3", 00:16:26.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.133 "is_configured": false, 00:16:26.133 "data_offset": 0, 00:16:26.133 "data_size": 0 00:16:26.133 } 00:16:26.133 ] 00:16:26.133 }' 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.133 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.391 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.392 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.392 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.392 [2024-11-06 09:09:25.419267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.392 [2024-11-06 09:09:25.419364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:26.392 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.392 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:26.392 
09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.392 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 [2024-11-06 09:09:25.431340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.651 [2024-11-06 09:09:25.433640] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.651 [2024-11-06 09:09:25.433692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.651 [2024-11-06 09:09:25.433704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.651 [2024-11-06 09:09:25.433717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.651 "name": "Existed_Raid", 00:16:26.651 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:26.651 "strip_size_kb": 64, 00:16:26.651 "state": "configuring", 00:16:26.651 "raid_level": "concat", 00:16:26.651 "superblock": true, 00:16:26.651 "num_base_bdevs": 3, 00:16:26.651 "num_base_bdevs_discovered": 1, 00:16:26.651 "num_base_bdevs_operational": 3, 00:16:26.651 "base_bdevs_list": [ 00:16:26.651 { 00:16:26.651 "name": "BaseBdev1", 00:16:26.651 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:26.651 "is_configured": true, 00:16:26.651 "data_offset": 2048, 00:16:26.651 "data_size": 63488 00:16:26.651 }, 00:16:26.651 { 00:16:26.651 "name": "BaseBdev2", 00:16:26.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.651 "is_configured": false, 00:16:26.651 "data_offset": 0, 00:16:26.651 "data_size": 0 00:16:26.651 }, 00:16:26.651 { 00:16:26.651 "name": "BaseBdev3", 00:16:26.651 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:26.651 "is_configured": false, 00:16:26.651 "data_offset": 0, 00:16:26.651 "data_size": 0 00:16:26.651 } 00:16:26.651 ] 00:16:26.651 }' 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.651 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.910 [2024-11-06 09:09:25.899729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.910 BaseBdev2 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.910 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.910 [ 00:16:26.910 { 00:16:26.910 "name": "BaseBdev2", 00:16:26.910 "aliases": [ 00:16:26.910 "2f67d449-956b-4152-80ff-de0b0d799026" 00:16:26.910 ], 00:16:26.910 "product_name": "Malloc disk", 00:16:26.910 "block_size": 512, 00:16:26.910 "num_blocks": 65536, 00:16:26.910 "uuid": "2f67d449-956b-4152-80ff-de0b0d799026", 00:16:26.910 "assigned_rate_limits": { 00:16:26.910 "rw_ios_per_sec": 0, 00:16:26.910 "rw_mbytes_per_sec": 0, 00:16:26.910 "r_mbytes_per_sec": 0, 00:16:26.910 "w_mbytes_per_sec": 0 00:16:26.910 }, 00:16:26.910 "claimed": true, 00:16:26.910 "claim_type": "exclusive_write", 00:16:26.910 "zoned": false, 00:16:26.910 "supported_io_types": { 00:16:26.910 "read": true, 00:16:26.910 "write": true, 00:16:26.910 "unmap": true, 00:16:26.910 "flush": true, 00:16:26.910 "reset": true, 00:16:26.910 "nvme_admin": false, 00:16:26.910 "nvme_io": false, 00:16:26.910 "nvme_io_md": false, 00:16:26.910 "write_zeroes": true, 00:16:26.910 "zcopy": true, 00:16:26.910 "get_zone_info": false, 00:16:26.910 "zone_management": false, 00:16:27.170 "zone_append": false, 00:16:27.170 "compare": false, 00:16:27.170 "compare_and_write": false, 00:16:27.170 "abort": true, 00:16:27.170 "seek_hole": false, 00:16:27.170 "seek_data": false, 00:16:27.170 "copy": true, 00:16:27.170 "nvme_iov_md": false 00:16:27.170 }, 00:16:27.170 "memory_domains": [ 00:16:27.170 { 00:16:27.170 "dma_device_id": "system", 00:16:27.170 "dma_device_type": 1 00:16:27.170 }, 00:16:27.170 { 00:16:27.170 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.170 "dma_device_type": 2 00:16:27.170 } 00:16:27.170 ], 00:16:27.170 "driver_specific": {} 00:16:27.170 } 00:16:27.170 ] 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.170 09:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.170 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.170 "name": "Existed_Raid", 00:16:27.170 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:27.170 "strip_size_kb": 64, 00:16:27.170 "state": "configuring", 00:16:27.170 "raid_level": "concat", 00:16:27.170 "superblock": true, 00:16:27.170 "num_base_bdevs": 3, 00:16:27.170 "num_base_bdevs_discovered": 2, 00:16:27.170 "num_base_bdevs_operational": 3, 00:16:27.170 "base_bdevs_list": [ 00:16:27.170 { 00:16:27.170 "name": "BaseBdev1", 00:16:27.170 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:27.170 "is_configured": true, 00:16:27.170 "data_offset": 2048, 00:16:27.170 "data_size": 63488 00:16:27.170 }, 00:16:27.170 { 00:16:27.170 "name": "BaseBdev2", 00:16:27.170 "uuid": "2f67d449-956b-4152-80ff-de0b0d799026", 00:16:27.170 "is_configured": true, 00:16:27.170 "data_offset": 2048, 00:16:27.170 "data_size": 63488 00:16:27.170 }, 00:16:27.170 { 00:16:27.170 "name": "BaseBdev3", 00:16:27.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.170 "is_configured": false, 00:16:27.170 "data_offset": 0, 00:16:27.170 "data_size": 0 00:16:27.170 } 00:16:27.170 ] 00:16:27.170 }' 00:16:27.170 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.170 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.429 09:09:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.429 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-11-06 09:09:26.445152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.429 [2024-11-06 09:09:26.445463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:27.429 [2024-11-06 09:09:26.445491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:27.430 [2024-11-06 09:09:26.445792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:27.430 [2024-11-06 09:09:26.445955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:27.430 [2024-11-06 09:09:26.445967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:27.430 [2024-11-06 09:09:26.446118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.430 BaseBdev3 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.430 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [ 00:16:27.689 { 00:16:27.689 "name": "BaseBdev3", 00:16:27.689 "aliases": [ 00:16:27.689 "ff0caa2d-3c3b-430f-a252-5e2e0eb9cee7" 00:16:27.689 ], 00:16:27.689 "product_name": "Malloc disk", 00:16:27.689 "block_size": 512, 00:16:27.689 "num_blocks": 65536, 00:16:27.689 "uuid": "ff0caa2d-3c3b-430f-a252-5e2e0eb9cee7", 00:16:27.689 "assigned_rate_limits": { 00:16:27.689 "rw_ios_per_sec": 0, 00:16:27.689 "rw_mbytes_per_sec": 0, 00:16:27.689 "r_mbytes_per_sec": 0, 00:16:27.689 "w_mbytes_per_sec": 0 00:16:27.689 }, 00:16:27.689 "claimed": true, 00:16:27.689 "claim_type": "exclusive_write", 00:16:27.689 "zoned": false, 00:16:27.689 "supported_io_types": { 00:16:27.689 "read": true, 00:16:27.689 "write": true, 00:16:27.689 "unmap": true, 00:16:27.689 "flush": true, 00:16:27.689 "reset": true, 00:16:27.689 "nvme_admin": false, 00:16:27.689 "nvme_io": false, 00:16:27.689 "nvme_io_md": false, 00:16:27.689 "write_zeroes": true, 00:16:27.689 "zcopy": true, 00:16:27.689 "get_zone_info": false, 00:16:27.689 "zone_management": false, 00:16:27.689 "zone_append": false, 00:16:27.689 "compare": false, 00:16:27.689 "compare_and_write": false, 00:16:27.689 "abort": true, 00:16:27.689 "seek_hole": false, 00:16:27.689 "seek_data": false, 
00:16:27.689 "copy": true, 00:16:27.689 "nvme_iov_md": false 00:16:27.690 }, 00:16:27.690 "memory_domains": [ 00:16:27.690 { 00:16:27.690 "dma_device_id": "system", 00:16:27.690 "dma_device_type": 1 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.690 "dma_device_type": 2 00:16:27.690 } 00:16:27.690 ], 00:16:27.690 "driver_specific": {} 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.690 "name": "Existed_Raid", 00:16:27.690 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:27.690 "strip_size_kb": 64, 00:16:27.690 "state": "online", 00:16:27.690 "raid_level": "concat", 00:16:27.690 "superblock": true, 00:16:27.690 "num_base_bdevs": 3, 00:16:27.690 "num_base_bdevs_discovered": 3, 00:16:27.690 "num_base_bdevs_operational": 3, 00:16:27.690 "base_bdevs_list": [ 00:16:27.690 { 00:16:27.690 "name": "BaseBdev1", 00:16:27.690 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "BaseBdev2", 00:16:27.690 "uuid": "2f67d449-956b-4152-80ff-de0b0d799026", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "BaseBdev3", 00:16:27.690 "uuid": "ff0caa2d-3c3b-430f-a252-5e2e0eb9cee7", 00:16:27.690 "is_configured": true, 00:16:27.690 "data_offset": 2048, 00:16:27.690 "data_size": 63488 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }' 00:16:27.690 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.690 09:09:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.949 [2024-11-06 09:09:26.920871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.949 09:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.950 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.950 "name": "Existed_Raid", 00:16:27.950 "aliases": [ 00:16:27.950 "8b6596e5-14ec-4310-9261-0cc2d38b2abb" 00:16:27.950 ], 00:16:27.950 "product_name": "Raid Volume", 00:16:27.950 "block_size": 512, 00:16:27.950 "num_blocks": 190464, 00:16:27.950 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:27.950 "assigned_rate_limits": { 00:16:27.950 "rw_ios_per_sec": 0, 00:16:27.950 "rw_mbytes_per_sec": 0, 00:16:27.950 
"r_mbytes_per_sec": 0, 00:16:27.950 "w_mbytes_per_sec": 0 00:16:27.950 }, 00:16:27.950 "claimed": false, 00:16:27.950 "zoned": false, 00:16:27.950 "supported_io_types": { 00:16:27.950 "read": true, 00:16:27.950 "write": true, 00:16:27.950 "unmap": true, 00:16:27.950 "flush": true, 00:16:27.950 "reset": true, 00:16:27.950 "nvme_admin": false, 00:16:27.950 "nvme_io": false, 00:16:27.950 "nvme_io_md": false, 00:16:27.950 "write_zeroes": true, 00:16:27.950 "zcopy": false, 00:16:27.950 "get_zone_info": false, 00:16:27.950 "zone_management": false, 00:16:27.950 "zone_append": false, 00:16:27.950 "compare": false, 00:16:27.950 "compare_and_write": false, 00:16:27.950 "abort": false, 00:16:27.950 "seek_hole": false, 00:16:27.950 "seek_data": false, 00:16:27.950 "copy": false, 00:16:27.950 "nvme_iov_md": false 00:16:27.950 }, 00:16:27.950 "memory_domains": [ 00:16:27.950 { 00:16:27.950 "dma_device_id": "system", 00:16:27.950 "dma_device_type": 1 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.950 "dma_device_type": 2 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "dma_device_id": "system", 00:16:27.950 "dma_device_type": 1 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.950 "dma_device_type": 2 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "dma_device_id": "system", 00:16:27.950 "dma_device_type": 1 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.950 "dma_device_type": 2 00:16:27.950 } 00:16:27.950 ], 00:16:27.950 "driver_specific": { 00:16:27.950 "raid": { 00:16:27.950 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:27.950 "strip_size_kb": 64, 00:16:27.950 "state": "online", 00:16:27.950 "raid_level": "concat", 00:16:27.950 "superblock": true, 00:16:27.950 "num_base_bdevs": 3, 00:16:27.950 "num_base_bdevs_discovered": 3, 00:16:27.950 "num_base_bdevs_operational": 3, 00:16:27.950 "base_bdevs_list": [ 00:16:27.950 { 00:16:27.950 
"name": "BaseBdev1", 00:16:27.950 "uuid": "28ab4160-d7bd-4ba0-af8e-7066eae37d97", 00:16:27.950 "is_configured": true, 00:16:27.950 "data_offset": 2048, 00:16:27.950 "data_size": 63488 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "name": "BaseBdev2", 00:16:27.950 "uuid": "2f67d449-956b-4152-80ff-de0b0d799026", 00:16:27.950 "is_configured": true, 00:16:27.950 "data_offset": 2048, 00:16:27.950 "data_size": 63488 00:16:27.950 }, 00:16:27.950 { 00:16:27.950 "name": "BaseBdev3", 00:16:27.950 "uuid": "ff0caa2d-3c3b-430f-a252-5e2e0eb9cee7", 00:16:27.950 "is_configured": true, 00:16:27.950 "data_offset": 2048, 00:16:27.950 "data_size": 63488 00:16:27.950 } 00:16:27.950 ] 00:16:27.950 } 00:16:27.950 } 00:16:27.950 }' 00:16:27.950 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.950 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.950 BaseBdev2 00:16:27.950 BaseBdev3' 00:16:28.214 09:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.214 09:09:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.214 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.214 [2024-11-06 09:09:27.180390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.214 [2024-11-06 09:09:27.180456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.214 [2024-11-06 09:09:27.180513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.472 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.472 "name": "Existed_Raid", 00:16:28.472 "uuid": "8b6596e5-14ec-4310-9261-0cc2d38b2abb", 00:16:28.472 "strip_size_kb": 64, 00:16:28.472 "state": "offline", 00:16:28.472 "raid_level": "concat", 00:16:28.472 "superblock": true, 00:16:28.472 "num_base_bdevs": 3, 00:16:28.473 "num_base_bdevs_discovered": 2, 00:16:28.473 "num_base_bdevs_operational": 2, 00:16:28.473 "base_bdevs_list": [ 00:16:28.473 { 00:16:28.473 "name": null, 00:16:28.473 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.473 "is_configured": false, 00:16:28.473 "data_offset": 0, 00:16:28.473 "data_size": 63488 00:16:28.473 }, 00:16:28.473 { 00:16:28.473 "name": "BaseBdev2", 00:16:28.473 "uuid": "2f67d449-956b-4152-80ff-de0b0d799026", 00:16:28.473 "is_configured": true, 00:16:28.473 "data_offset": 2048, 00:16:28.473 "data_size": 63488 00:16:28.473 }, 00:16:28.473 { 00:16:28.473 "name": "BaseBdev3", 00:16:28.473 "uuid": "ff0caa2d-3c3b-430f-a252-5e2e0eb9cee7", 00:16:28.473 "is_configured": true, 00:16:28.473 "data_offset": 2048, 00:16:28.473 "data_size": 63488 00:16:28.473 } 00:16:28.473 ] 00:16:28.473 }' 00:16:28.473 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.473 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.730 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.730 [2024-11-06 09:09:27.721790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.988 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.989 [2024-11-06 09:09:27.875095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.989 [2024-11-06 09:09:27.875170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.989 09:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.989 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 BaseBdev2 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 
09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 [ 00:16:29.248 { 00:16:29.248 "name": "BaseBdev2", 00:16:29.248 "aliases": [ 00:16:29.248 "35235991-c76d-459f-a866-39d2e7b55955" 00:16:29.248 ], 00:16:29.248 "product_name": "Malloc disk", 00:16:29.248 "block_size": 512, 00:16:29.248 "num_blocks": 65536, 00:16:29.248 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:29.248 "assigned_rate_limits": { 00:16:29.248 "rw_ios_per_sec": 0, 00:16:29.248 "rw_mbytes_per_sec": 0, 00:16:29.248 "r_mbytes_per_sec": 0, 00:16:29.248 "w_mbytes_per_sec": 0 
00:16:29.248 }, 00:16:29.248 "claimed": false, 00:16:29.248 "zoned": false, 00:16:29.248 "supported_io_types": { 00:16:29.248 "read": true, 00:16:29.248 "write": true, 00:16:29.248 "unmap": true, 00:16:29.248 "flush": true, 00:16:29.248 "reset": true, 00:16:29.248 "nvme_admin": false, 00:16:29.248 "nvme_io": false, 00:16:29.248 "nvme_io_md": false, 00:16:29.248 "write_zeroes": true, 00:16:29.248 "zcopy": true, 00:16:29.248 "get_zone_info": false, 00:16:29.248 "zone_management": false, 00:16:29.248 "zone_append": false, 00:16:29.248 "compare": false, 00:16:29.248 "compare_and_write": false, 00:16:29.248 "abort": true, 00:16:29.248 "seek_hole": false, 00:16:29.248 "seek_data": false, 00:16:29.248 "copy": true, 00:16:29.248 "nvme_iov_md": false 00:16:29.248 }, 00:16:29.248 "memory_domains": [ 00:16:29.248 { 00:16:29.248 "dma_device_id": "system", 00:16:29.248 "dma_device_type": 1 00:16:29.248 }, 00:16:29.248 { 00:16:29.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.248 "dma_device_type": 2 00:16:29.248 } 00:16:29.248 ], 00:16:29.248 "driver_specific": {} 00:16:29.248 } 00:16:29.248 ] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 BaseBdev3 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.248 [ 00:16:29.248 { 00:16:29.248 "name": "BaseBdev3", 00:16:29.248 "aliases": [ 00:16:29.248 "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef" 00:16:29.248 ], 00:16:29.248 "product_name": "Malloc disk", 00:16:29.248 "block_size": 512, 00:16:29.248 "num_blocks": 65536, 00:16:29.248 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:29.248 "assigned_rate_limits": { 00:16:29.248 "rw_ios_per_sec": 0, 00:16:29.248 "rw_mbytes_per_sec": 0, 
00:16:29.248 "r_mbytes_per_sec": 0, 00:16:29.248 "w_mbytes_per_sec": 0 00:16:29.248 }, 00:16:29.248 "claimed": false, 00:16:29.248 "zoned": false, 00:16:29.248 "supported_io_types": { 00:16:29.248 "read": true, 00:16:29.248 "write": true, 00:16:29.248 "unmap": true, 00:16:29.248 "flush": true, 00:16:29.248 "reset": true, 00:16:29.248 "nvme_admin": false, 00:16:29.248 "nvme_io": false, 00:16:29.248 "nvme_io_md": false, 00:16:29.248 "write_zeroes": true, 00:16:29.248 "zcopy": true, 00:16:29.248 "get_zone_info": false, 00:16:29.248 "zone_management": false, 00:16:29.248 "zone_append": false, 00:16:29.248 "compare": false, 00:16:29.248 "compare_and_write": false, 00:16:29.248 "abort": true, 00:16:29.248 "seek_hole": false, 00:16:29.248 "seek_data": false, 00:16:29.248 "copy": true, 00:16:29.248 "nvme_iov_md": false 00:16:29.248 }, 00:16:29.248 "memory_domains": [ 00:16:29.248 { 00:16:29.248 "dma_device_id": "system", 00:16:29.248 "dma_device_type": 1 00:16:29.248 }, 00:16:29.248 { 00:16:29.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.248 "dma_device_type": 2 00:16:29.248 } 00:16:29.248 ], 00:16:29.248 "driver_specific": {} 00:16:29.248 } 00:16:29.248 ] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.248 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.249 [2024-11-06 09:09:28.201596] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.249 [2024-11-06 09:09:28.201874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.249 [2024-11-06 09:09:28.202052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.249 [2024-11-06 09:09:28.204242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.249 "name": "Existed_Raid", 00:16:29.249 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:29.249 "strip_size_kb": 64, 00:16:29.249 "state": "configuring", 00:16:29.249 "raid_level": "concat", 00:16:29.249 "superblock": true, 00:16:29.249 "num_base_bdevs": 3, 00:16:29.249 "num_base_bdevs_discovered": 2, 00:16:29.249 "num_base_bdevs_operational": 3, 00:16:29.249 "base_bdevs_list": [ 00:16:29.249 { 00:16:29.249 "name": "BaseBdev1", 00:16:29.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.249 "is_configured": false, 00:16:29.249 "data_offset": 0, 00:16:29.249 "data_size": 0 00:16:29.249 }, 00:16:29.249 { 00:16:29.249 "name": "BaseBdev2", 00:16:29.249 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:29.249 "is_configured": true, 00:16:29.249 "data_offset": 2048, 00:16:29.249 "data_size": 63488 00:16:29.249 }, 00:16:29.249 { 00:16:29.249 "name": "BaseBdev3", 00:16:29.249 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:29.249 "is_configured": true, 00:16:29.249 "data_offset": 2048, 00:16:29.249 "data_size": 63488 00:16:29.249 } 00:16:29.249 ] 00:16:29.249 }' 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.249 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.816 [2024-11-06 09:09:28.680934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.816 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.816 "name": "Existed_Raid", 00:16:29.816 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:29.816 "strip_size_kb": 64, 00:16:29.816 "state": "configuring", 00:16:29.816 "raid_level": "concat", 00:16:29.816 "superblock": true, 00:16:29.816 "num_base_bdevs": 3, 00:16:29.816 "num_base_bdevs_discovered": 1, 00:16:29.816 "num_base_bdevs_operational": 3, 00:16:29.816 "base_bdevs_list": [ 00:16:29.816 { 00:16:29.816 "name": "BaseBdev1", 00:16:29.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.816 "is_configured": false, 00:16:29.816 "data_offset": 0, 00:16:29.816 "data_size": 0 00:16:29.816 }, 00:16:29.816 { 00:16:29.816 "name": null, 00:16:29.816 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:29.816 "is_configured": false, 00:16:29.816 "data_offset": 0, 00:16:29.816 "data_size": 63488 00:16:29.816 }, 00:16:29.816 { 00:16:29.817 "name": "BaseBdev3", 00:16:29.817 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:29.817 "is_configured": true, 00:16:29.817 "data_offset": 2048, 00:16:29.817 "data_size": 63488 00:16:29.817 } 00:16:29.817 ] 00:16:29.817 }' 00:16:29.817 09:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.817 09:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.422 [2024-11-06 09:09:29.222792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.422 BaseBdev1 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.422 [ 00:16:30.422 { 00:16:30.422 "name": "BaseBdev1", 00:16:30.422 "aliases": [ 00:16:30.422 "b07f3ab4-c3b6-4e11-b27f-63797f19221f" 00:16:30.422 ], 00:16:30.422 "product_name": "Malloc disk", 00:16:30.422 "block_size": 512, 00:16:30.422 "num_blocks": 65536, 00:16:30.422 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:30.422 "assigned_rate_limits": { 00:16:30.422 "rw_ios_per_sec": 0, 00:16:30.422 "rw_mbytes_per_sec": 0, 00:16:30.422 "r_mbytes_per_sec": 0, 00:16:30.422 "w_mbytes_per_sec": 0 00:16:30.422 }, 00:16:30.422 "claimed": true, 00:16:30.422 "claim_type": "exclusive_write", 00:16:30.422 "zoned": false, 00:16:30.422 "supported_io_types": { 00:16:30.422 "read": true, 00:16:30.422 "write": true, 00:16:30.422 "unmap": true, 00:16:30.422 "flush": true, 00:16:30.422 "reset": true, 00:16:30.422 "nvme_admin": false, 00:16:30.422 "nvme_io": false, 00:16:30.422 "nvme_io_md": false, 00:16:30.422 "write_zeroes": true, 00:16:30.422 "zcopy": true, 00:16:30.422 "get_zone_info": false, 00:16:30.422 "zone_management": false, 00:16:30.422 "zone_append": false, 00:16:30.422 "compare": false, 00:16:30.422 "compare_and_write": false, 00:16:30.422 "abort": true, 00:16:30.422 "seek_hole": false, 00:16:30.422 "seek_data": false, 00:16:30.422 "copy": true, 00:16:30.422 "nvme_iov_md": false 00:16:30.422 }, 00:16:30.422 "memory_domains": [ 00:16:30.422 { 00:16:30.422 "dma_device_id": "system", 00:16:30.422 "dma_device_type": 1 00:16:30.422 }, 00:16:30.422 { 00:16:30.422 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:30.422 "dma_device_type": 2 00:16:30.422 } 00:16:30.422 ], 00:16:30.422 "driver_specific": {} 00:16:30.422 } 00:16:30.422 ] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.422 "name": "Existed_Raid", 00:16:30.422 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:30.422 "strip_size_kb": 64, 00:16:30.422 "state": "configuring", 00:16:30.422 "raid_level": "concat", 00:16:30.422 "superblock": true, 00:16:30.422 "num_base_bdevs": 3, 00:16:30.422 "num_base_bdevs_discovered": 2, 00:16:30.422 "num_base_bdevs_operational": 3, 00:16:30.422 "base_bdevs_list": [ 00:16:30.422 { 00:16:30.422 "name": "BaseBdev1", 00:16:30.422 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:30.422 "is_configured": true, 00:16:30.422 "data_offset": 2048, 00:16:30.422 "data_size": 63488 00:16:30.422 }, 00:16:30.422 { 00:16:30.422 "name": null, 00:16:30.422 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:30.422 "is_configured": false, 00:16:30.422 "data_offset": 0, 00:16:30.422 "data_size": 63488 00:16:30.422 }, 00:16:30.422 { 00:16:30.422 "name": "BaseBdev3", 00:16:30.422 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:30.422 "is_configured": true, 00:16:30.422 "data_offset": 2048, 00:16:30.422 "data_size": 63488 00:16:30.422 } 00:16:30.422 ] 00:16:30.422 }' 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.422 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.688 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.688 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.688 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.688 09:09:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.948 [2024-11-06 09:09:29.746309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.948 "name": "Existed_Raid", 00:16:30.948 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:30.948 "strip_size_kb": 64, 00:16:30.948 "state": "configuring", 00:16:30.948 "raid_level": "concat", 00:16:30.948 "superblock": true, 00:16:30.948 "num_base_bdevs": 3, 00:16:30.948 "num_base_bdevs_discovered": 1, 00:16:30.948 "num_base_bdevs_operational": 3, 00:16:30.948 "base_bdevs_list": [ 00:16:30.948 { 00:16:30.948 "name": "BaseBdev1", 00:16:30.948 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:30.948 "is_configured": true, 00:16:30.948 "data_offset": 2048, 00:16:30.948 "data_size": 63488 00:16:30.948 }, 00:16:30.948 { 00:16:30.948 "name": null, 00:16:30.948 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:30.948 "is_configured": false, 00:16:30.948 "data_offset": 0, 00:16:30.948 "data_size": 63488 00:16:30.948 }, 00:16:30.948 { 00:16:30.948 "name": null, 00:16:30.948 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:30.948 "is_configured": false, 00:16:30.948 "data_offset": 0, 00:16:30.948 "data_size": 63488 00:16:30.948 } 00:16:30.948 ] 00:16:30.948 }' 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.948 09:09:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.206 [2024-11-06 09:09:30.221713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.206 09:09:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.206 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.464 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.464 "name": "Existed_Raid", 00:16:31.464 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:31.464 "strip_size_kb": 64, 00:16:31.464 "state": "configuring", 00:16:31.464 "raid_level": "concat", 00:16:31.464 "superblock": true, 00:16:31.464 "num_base_bdevs": 3, 00:16:31.464 "num_base_bdevs_discovered": 2, 00:16:31.464 "num_base_bdevs_operational": 3, 00:16:31.464 "base_bdevs_list": [ 00:16:31.464 { 00:16:31.464 "name": "BaseBdev1", 00:16:31.464 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:31.464 "is_configured": true, 00:16:31.464 "data_offset": 2048, 00:16:31.464 "data_size": 63488 00:16:31.464 }, 00:16:31.464 { 00:16:31.464 "name": null, 00:16:31.464 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:31.464 "is_configured": 
false, 00:16:31.464 "data_offset": 0, 00:16:31.464 "data_size": 63488 00:16:31.464 }, 00:16:31.464 { 00:16:31.464 "name": "BaseBdev3", 00:16:31.464 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:31.465 "is_configured": true, 00:16:31.465 "data_offset": 2048, 00:16:31.465 "data_size": 63488 00:16:31.465 } 00:16:31.465 ] 00:16:31.465 }' 00:16:31.465 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.465 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.723 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.723 [2024-11-06 09:09:30.753771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:31.981 09:09:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.981 "name": "Existed_Raid", 00:16:31.981 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:31.981 "strip_size_kb": 64, 00:16:31.981 "state": "configuring", 00:16:31.981 "raid_level": "concat", 00:16:31.981 "superblock": true, 00:16:31.981 "num_base_bdevs": 3, 00:16:31.981 
"num_base_bdevs_discovered": 1, 00:16:31.981 "num_base_bdevs_operational": 3, 00:16:31.981 "base_bdevs_list": [ 00:16:31.981 { 00:16:31.981 "name": null, 00:16:31.981 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:31.981 "is_configured": false, 00:16:31.981 "data_offset": 0, 00:16:31.981 "data_size": 63488 00:16:31.981 }, 00:16:31.981 { 00:16:31.981 "name": null, 00:16:31.981 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:31.981 "is_configured": false, 00:16:31.981 "data_offset": 0, 00:16:31.981 "data_size": 63488 00:16:31.981 }, 00:16:31.981 { 00:16:31.981 "name": "BaseBdev3", 00:16:31.981 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:31.981 "is_configured": true, 00:16:31.981 "data_offset": 2048, 00:16:31.981 "data_size": 63488 00:16:31.981 } 00:16:31.981 ] 00:16:31.981 }' 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.981 09:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.549 09:09:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.549 [2024-11-06 09:09:31.349728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.549 
09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.549 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.549 "name": "Existed_Raid", 00:16:32.549 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:32.550 "strip_size_kb": 64, 00:16:32.550 "state": "configuring", 00:16:32.550 "raid_level": "concat", 00:16:32.550 "superblock": true, 00:16:32.550 "num_base_bdevs": 3, 00:16:32.550 "num_base_bdevs_discovered": 2, 00:16:32.550 "num_base_bdevs_operational": 3, 00:16:32.550 "base_bdevs_list": [ 00:16:32.550 { 00:16:32.550 "name": null, 00:16:32.550 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:32.550 "is_configured": false, 00:16:32.550 "data_offset": 0, 00:16:32.550 "data_size": 63488 00:16:32.550 }, 00:16:32.550 { 00:16:32.550 "name": "BaseBdev2", 00:16:32.550 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:32.550 "is_configured": true, 00:16:32.550 "data_offset": 2048, 00:16:32.550 "data_size": 63488 00:16:32.550 }, 00:16:32.550 { 00:16:32.550 "name": "BaseBdev3", 00:16:32.550 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:32.550 "is_configured": true, 00:16:32.550 "data_offset": 2048, 00:16:32.550 "data_size": 63488 00:16:32.550 } 00:16:32.550 ] 00:16:32.550 }' 00:16:32.550 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.550 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.809 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b07f3ab4-c3b6-4e11-b27f-63797f19221f 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.068 [2024-11-06 09:09:31.920984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:33.068 [2024-11-06 09:09:31.921234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:33.068 [2024-11-06 09:09:31.921254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.068 [2024-11-06 09:09:31.921538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:33.068 [2024-11-06 09:09:31.921682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:33.068 [2024-11-06 09:09:31.921692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:16:33.068 NewBaseBdev 00:16:33.068 [2024-11-06 09:09:31.921834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.068 [ 00:16:33.068 { 00:16:33.068 "name": "NewBaseBdev", 00:16:33.068 "aliases": [ 00:16:33.068 "b07f3ab4-c3b6-4e11-b27f-63797f19221f" 00:16:33.068 ], 00:16:33.068 "product_name": "Malloc disk", 00:16:33.068 "block_size": 512, 
00:16:33.068 "num_blocks": 65536, 00:16:33.068 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:33.068 "assigned_rate_limits": { 00:16:33.068 "rw_ios_per_sec": 0, 00:16:33.068 "rw_mbytes_per_sec": 0, 00:16:33.068 "r_mbytes_per_sec": 0, 00:16:33.068 "w_mbytes_per_sec": 0 00:16:33.068 }, 00:16:33.068 "claimed": true, 00:16:33.068 "claim_type": "exclusive_write", 00:16:33.068 "zoned": false, 00:16:33.068 "supported_io_types": { 00:16:33.068 "read": true, 00:16:33.068 "write": true, 00:16:33.068 "unmap": true, 00:16:33.068 "flush": true, 00:16:33.068 "reset": true, 00:16:33.068 "nvme_admin": false, 00:16:33.068 "nvme_io": false, 00:16:33.068 "nvme_io_md": false, 00:16:33.068 "write_zeroes": true, 00:16:33.068 "zcopy": true, 00:16:33.068 "get_zone_info": false, 00:16:33.068 "zone_management": false, 00:16:33.068 "zone_append": false, 00:16:33.068 "compare": false, 00:16:33.068 "compare_and_write": false, 00:16:33.068 "abort": true, 00:16:33.068 "seek_hole": false, 00:16:33.068 "seek_data": false, 00:16:33.068 "copy": true, 00:16:33.068 "nvme_iov_md": false 00:16:33.068 }, 00:16:33.068 "memory_domains": [ 00:16:33.068 { 00:16:33.068 "dma_device_id": "system", 00:16:33.068 "dma_device_type": 1 00:16:33.068 }, 00:16:33.068 { 00:16:33.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.068 "dma_device_type": 2 00:16:33.068 } 00:16:33.068 ], 00:16:33.068 "driver_specific": {} 00:16:33.068 } 00:16:33.068 ] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.068 09:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.068 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.068 "name": "Existed_Raid", 00:16:33.068 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:33.068 "strip_size_kb": 64, 00:16:33.068 "state": "online", 00:16:33.068 "raid_level": "concat", 00:16:33.068 "superblock": true, 00:16:33.068 "num_base_bdevs": 3, 00:16:33.068 "num_base_bdevs_discovered": 3, 00:16:33.068 "num_base_bdevs_operational": 3, 00:16:33.068 "base_bdevs_list": [ 00:16:33.068 { 00:16:33.068 "name": "NewBaseBdev", 00:16:33.068 "uuid": 
"b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:33.068 "is_configured": true, 00:16:33.068 "data_offset": 2048, 00:16:33.068 "data_size": 63488 00:16:33.068 }, 00:16:33.068 { 00:16:33.068 "name": "BaseBdev2", 00:16:33.068 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:33.068 "is_configured": true, 00:16:33.068 "data_offset": 2048, 00:16:33.068 "data_size": 63488 00:16:33.068 }, 00:16:33.068 { 00:16:33.068 "name": "BaseBdev3", 00:16:33.068 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:33.068 "is_configured": true, 00:16:33.068 "data_offset": 2048, 00:16:33.068 "data_size": 63488 00:16:33.068 } 00:16:33.068 ] 00:16:33.068 }' 00:16:33.068 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.068 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:33.636 [2024-11-06 09:09:32.436629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.636 "name": "Existed_Raid", 00:16:33.636 "aliases": [ 00:16:33.636 "40150eb3-5195-46f6-beda-ef13947104b9" 00:16:33.636 ], 00:16:33.636 "product_name": "Raid Volume", 00:16:33.636 "block_size": 512, 00:16:33.636 "num_blocks": 190464, 00:16:33.636 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:33.636 "assigned_rate_limits": { 00:16:33.636 "rw_ios_per_sec": 0, 00:16:33.636 "rw_mbytes_per_sec": 0, 00:16:33.636 "r_mbytes_per_sec": 0, 00:16:33.636 "w_mbytes_per_sec": 0 00:16:33.636 }, 00:16:33.636 "claimed": false, 00:16:33.636 "zoned": false, 00:16:33.636 "supported_io_types": { 00:16:33.636 "read": true, 00:16:33.636 "write": true, 00:16:33.636 "unmap": true, 00:16:33.636 "flush": true, 00:16:33.636 "reset": true, 00:16:33.636 "nvme_admin": false, 00:16:33.636 "nvme_io": false, 00:16:33.636 "nvme_io_md": false, 00:16:33.636 "write_zeroes": true, 00:16:33.636 "zcopy": false, 00:16:33.636 "get_zone_info": false, 00:16:33.636 "zone_management": false, 00:16:33.636 "zone_append": false, 00:16:33.636 "compare": false, 00:16:33.636 "compare_and_write": false, 00:16:33.636 "abort": false, 00:16:33.636 "seek_hole": false, 00:16:33.636 "seek_data": false, 00:16:33.636 "copy": false, 00:16:33.636 "nvme_iov_md": false 00:16:33.636 }, 00:16:33.636 "memory_domains": [ 00:16:33.636 { 00:16:33.636 "dma_device_id": "system", 00:16:33.636 "dma_device_type": 1 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.636 "dma_device_type": 2 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "dma_device_id": "system", 00:16:33.636 "dma_device_type": 1 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.636 "dma_device_type": 2 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "dma_device_id": "system", 00:16:33.636 "dma_device_type": 1 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.636 "dma_device_type": 2 00:16:33.636 } 00:16:33.636 ], 00:16:33.636 "driver_specific": { 00:16:33.636 "raid": { 00:16:33.636 "uuid": "40150eb3-5195-46f6-beda-ef13947104b9", 00:16:33.636 "strip_size_kb": 64, 00:16:33.636 "state": "online", 00:16:33.636 "raid_level": "concat", 00:16:33.636 "superblock": true, 00:16:33.636 "num_base_bdevs": 3, 00:16:33.636 "num_base_bdevs_discovered": 3, 00:16:33.636 "num_base_bdevs_operational": 3, 00:16:33.636 "base_bdevs_list": [ 00:16:33.636 { 00:16:33.636 "name": "NewBaseBdev", 00:16:33.636 "uuid": "b07f3ab4-c3b6-4e11-b27f-63797f19221f", 00:16:33.636 "is_configured": true, 00:16:33.636 "data_offset": 2048, 00:16:33.636 "data_size": 63488 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "name": "BaseBdev2", 00:16:33.636 "uuid": "35235991-c76d-459f-a866-39d2e7b55955", 00:16:33.636 "is_configured": true, 00:16:33.636 "data_offset": 2048, 00:16:33.636 "data_size": 63488 00:16:33.636 }, 00:16:33.636 { 00:16:33.636 "name": "BaseBdev3", 00:16:33.636 "uuid": "1a8c4188-1fc2-4118-bf0a-fd3686b6b4ef", 00:16:33.636 "is_configured": true, 00:16:33.636 "data_offset": 2048, 00:16:33.636 "data_size": 63488 00:16:33.636 } 00:16:33.636 ] 00:16:33.636 } 00:16:33.636 } 00:16:33.636 }' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.636 BaseBdev2 00:16:33.636 BaseBdev3' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.636 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.895 [2024-11-06 09:09:32.715922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.895 [2024-11-06 09:09:32.716105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.895 [2024-11-06 09:09:32.716213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.895 [2024-11-06 09:09:32.716302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.895 [2024-11-06 09:09:32.716326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66016 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66016 ']' 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66016 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66016 00:16:33.895 killing process with pid 66016 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66016' 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66016 00:16:33.895 09:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66016 00:16:33.895 [2024-11-06 09:09:32.766865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.154 [2024-11-06 09:09:33.075780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.531 09:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:35.531 00:16:35.531 real 0m10.716s 00:16:35.531 user 0m16.924s 00:16:35.531 sys 0m2.272s 00:16:35.531 09:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:16:35.531 09:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 ************************************ 00:16:35.531 END TEST raid_state_function_test_sb 00:16:35.531 ************************************ 00:16:35.531 09:09:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:35.531 09:09:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:35.531 09:09:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.531 09:09:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 ************************************ 00:16:35.531 START TEST raid_superblock_test 00:16:35.531 ************************************ 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:35.531 09:09:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66642 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66642 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66642 ']' 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.531 09:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 [2024-11-06 09:09:34.418744] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:16:35.531 [2024-11-06 09:09:34.419170] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66642 ] 00:16:35.790 [2024-11-06 09:09:34.612078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.790 [2024-11-06 09:09:34.747074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.048 [2024-11-06 09:09:34.974622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.048 [2024-11-06 09:09:34.974834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:36.307 
09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.307 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 malloc1 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 [2024-11-06 09:09:35.354097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:36.566 [2024-11-06 09:09:35.354195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.566 [2024-11-06 09:09:35.354229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:36.566 [2024-11-06 09:09:35.354243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.566 [2024-11-06 09:09:35.356979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.566 [2024-11-06 09:09:35.357043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:36.566 pt1 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 malloc2 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 [2024-11-06 09:09:35.420102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.566 [2024-11-06 09:09:35.420295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.566 [2024-11-06 09:09:35.420367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:36.566 [2024-11-06 09:09:35.420502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.566 [2024-11-06 09:09:35.423366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.566 [2024-11-06 09:09:35.423528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.566 
pt2 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 malloc3 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.566 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 [2024-11-06 09:09:35.495367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:36.566 [2024-11-06 09:09:35.495453] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.566 [2024-11-06 09:09:35.495491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:36.567 [2024-11-06 09:09:35.495511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.567 [2024-11-06 09:09:35.498831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.567 [2024-11-06 09:09:35.499024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:36.567 pt3 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.567 [2024-11-06 09:09:35.507424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:36.567 [2024-11-06 09:09:35.509747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.567 [2024-11-06 09:09:35.509971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:36.567 [2024-11-06 09:09:35.510178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:36.567 [2024-11-06 09:09:35.510199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:36.567 [2024-11-06 09:09:35.510526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:16:36.567 [2024-11-06 09:09:35.510728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:36.567 [2024-11-06 09:09:35.510742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:36.567 [2024-11-06 09:09:35.510922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.567 09:09:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.567 "name": "raid_bdev1", 00:16:36.567 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:36.567 "strip_size_kb": 64, 00:16:36.567 "state": "online", 00:16:36.567 "raid_level": "concat", 00:16:36.567 "superblock": true, 00:16:36.567 "num_base_bdevs": 3, 00:16:36.567 "num_base_bdevs_discovered": 3, 00:16:36.567 "num_base_bdevs_operational": 3, 00:16:36.567 "base_bdevs_list": [ 00:16:36.567 { 00:16:36.567 "name": "pt1", 00:16:36.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.567 "is_configured": true, 00:16:36.567 "data_offset": 2048, 00:16:36.567 "data_size": 63488 00:16:36.567 }, 00:16:36.567 { 00:16:36.567 "name": "pt2", 00:16:36.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.567 "is_configured": true, 00:16:36.567 "data_offset": 2048, 00:16:36.567 "data_size": 63488 00:16:36.567 }, 00:16:36.567 { 00:16:36.567 "name": "pt3", 00:16:36.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.567 "is_configured": true, 00:16:36.567 "data_offset": 2048, 00:16:36.567 "data_size": 63488 00:16:36.567 } 00:16:36.567 ] 00:16:36.567 }' 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.567 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.135 09:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.135 [2024-11-06 09:09:35.975077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.135 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.135 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.135 "name": "raid_bdev1", 00:16:37.135 "aliases": [ 00:16:37.135 "4e3ddc2b-9224-4995-acef-7287adbac9a6" 00:16:37.135 ], 00:16:37.135 "product_name": "Raid Volume", 00:16:37.135 "block_size": 512, 00:16:37.135 "num_blocks": 190464, 00:16:37.135 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:37.135 "assigned_rate_limits": { 00:16:37.135 "rw_ios_per_sec": 0, 00:16:37.135 "rw_mbytes_per_sec": 0, 00:16:37.135 "r_mbytes_per_sec": 0, 00:16:37.135 "w_mbytes_per_sec": 0 00:16:37.135 }, 00:16:37.135 "claimed": false, 00:16:37.135 "zoned": false, 00:16:37.135 "supported_io_types": { 00:16:37.135 "read": true, 00:16:37.135 "write": true, 00:16:37.135 "unmap": true, 00:16:37.135 "flush": true, 00:16:37.135 "reset": true, 00:16:37.135 "nvme_admin": false, 00:16:37.135 "nvme_io": false, 00:16:37.135 "nvme_io_md": false, 00:16:37.135 "write_zeroes": true, 00:16:37.135 "zcopy": false, 00:16:37.135 "get_zone_info": false, 00:16:37.135 "zone_management": false, 00:16:37.135 "zone_append": false, 00:16:37.135 "compare": 
false, 00:16:37.135 "compare_and_write": false, 00:16:37.135 "abort": false, 00:16:37.135 "seek_hole": false, 00:16:37.135 "seek_data": false, 00:16:37.135 "copy": false, 00:16:37.135 "nvme_iov_md": false 00:16:37.135 }, 00:16:37.135 "memory_domains": [ 00:16:37.135 { 00:16:37.135 "dma_device_id": "system", 00:16:37.135 "dma_device_type": 1 00:16:37.135 }, 00:16:37.135 { 00:16:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.136 "dma_device_type": 2 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "dma_device_id": "system", 00:16:37.136 "dma_device_type": 1 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.136 "dma_device_type": 2 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "dma_device_id": "system", 00:16:37.136 "dma_device_type": 1 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.136 "dma_device_type": 2 00:16:37.136 } 00:16:37.136 ], 00:16:37.136 "driver_specific": { 00:16:37.136 "raid": { 00:16:37.136 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:37.136 "strip_size_kb": 64, 00:16:37.136 "state": "online", 00:16:37.136 "raid_level": "concat", 00:16:37.136 "superblock": true, 00:16:37.136 "num_base_bdevs": 3, 00:16:37.136 "num_base_bdevs_discovered": 3, 00:16:37.136 "num_base_bdevs_operational": 3, 00:16:37.136 "base_bdevs_list": [ 00:16:37.136 { 00:16:37.136 "name": "pt1", 00:16:37.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.136 "is_configured": true, 00:16:37.136 "data_offset": 2048, 00:16:37.136 "data_size": 63488 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "name": "pt2", 00:16:37.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.136 "is_configured": true, 00:16:37.136 "data_offset": 2048, 00:16:37.136 "data_size": 63488 00:16:37.136 }, 00:16:37.136 { 00:16:37.136 "name": "pt3", 00:16:37.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.136 "is_configured": true, 00:16:37.136 "data_offset": 2048, 00:16:37.136 
"data_size": 63488 00:16:37.136 } 00:16:37.136 ] 00:16:37.136 } 00:16:37.136 } 00:16:37.136 }' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:37.136 pt2 00:16:37.136 pt3' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 [2024-11-06 09:09:36.238675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e3ddc2b-9224-4995-acef-7287adbac9a6 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4e3ddc2b-9224-4995-acef-7287adbac9a6 ']' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 [2024-11-06 09:09:36.286305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.395 [2024-11-06 09:09:36.286338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.395 [2024-11-06 09:09:36.286428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.395 [2024-11-06 09:09:36.286499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.395 [2024-11-06 09:09:36.286512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.395 09:09:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:37.395 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.396 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.654 [2024-11-06 09:09:36.434176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:37.654 [2024-11-06 09:09:36.436522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:16:37.654 [2024-11-06 09:09:36.436579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:37.654 [2024-11-06 09:09:36.436633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:37.654 [2024-11-06 09:09:36.436703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:37.654 [2024-11-06 09:09:36.436727] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:37.654 [2024-11-06 09:09:36.436749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.654 [2024-11-06 09:09:36.436760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:37.654 request: 00:16:37.654 { 00:16:37.654 "name": "raid_bdev1", 00:16:37.654 "raid_level": "concat", 00:16:37.654 "base_bdevs": [ 00:16:37.654 "malloc1", 00:16:37.655 "malloc2", 00:16:37.655 "malloc3" 00:16:37.655 ], 00:16:37.655 "strip_size_kb": 64, 00:16:37.655 "superblock": false, 00:16:37.655 "method": "bdev_raid_create", 00:16:37.655 "req_id": 1 00:16:37.655 } 00:16:37.655 Got JSON-RPC error response 00:16:37.655 response: 00:16:37.655 { 00:16:37.655 "code": -17, 00:16:37.655 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:37.655 } 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.655 [2024-11-06 09:09:36.501972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.655 [2024-11-06 09:09:36.502191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.655 [2024-11-06 09:09:36.502227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:37.655 [2024-11-06 09:09:36.502240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.655 [2024-11-06 09:09:36.504883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.655 [2024-11-06 09:09:36.504937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.655 [2024-11-06 09:09:36.505030] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:37.655 [2024-11-06 09:09:36.505089] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.655 pt1 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.655 "name": "raid_bdev1", 
00:16:37.655 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:37.655 "strip_size_kb": 64, 00:16:37.655 "state": "configuring", 00:16:37.655 "raid_level": "concat", 00:16:37.655 "superblock": true, 00:16:37.655 "num_base_bdevs": 3, 00:16:37.655 "num_base_bdevs_discovered": 1, 00:16:37.655 "num_base_bdevs_operational": 3, 00:16:37.655 "base_bdevs_list": [ 00:16:37.655 { 00:16:37.655 "name": "pt1", 00:16:37.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.655 "is_configured": true, 00:16:37.655 "data_offset": 2048, 00:16:37.655 "data_size": 63488 00:16:37.655 }, 00:16:37.655 { 00:16:37.655 "name": null, 00:16:37.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.655 "is_configured": false, 00:16:37.655 "data_offset": 2048, 00:16:37.655 "data_size": 63488 00:16:37.655 }, 00:16:37.655 { 00:16:37.655 "name": null, 00:16:37.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.655 "is_configured": false, 00:16:37.655 "data_offset": 2048, 00:16:37.655 "data_size": 63488 00:16:37.655 } 00:16:37.655 ] 00:16:37.655 }' 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.655 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 [2024-11-06 09:09:36.905723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.914 [2024-11-06 09:09:36.905971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.914 [2024-11-06 09:09:36.906009] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:37.914 [2024-11-06 09:09:36.906022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.914 [2024-11-06 09:09:36.906511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.914 [2024-11-06 09:09:36.906533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.914 [2024-11-06 09:09:36.906622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.914 [2024-11-06 09:09:36.906647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.914 pt2 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 [2024-11-06 09:09:36.913765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.914 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.915 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.915 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.915 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.346 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.346 "name": "raid_bdev1", 00:16:38.346 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:38.346 "strip_size_kb": 64, 00:16:38.346 "state": "configuring", 00:16:38.346 "raid_level": "concat", 00:16:38.346 "superblock": true, 00:16:38.346 "num_base_bdevs": 3, 00:16:38.346 "num_base_bdevs_discovered": 1, 00:16:38.346 "num_base_bdevs_operational": 3, 00:16:38.346 "base_bdevs_list": [ 00:16:38.346 { 00:16:38.346 "name": "pt1", 00:16:38.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.346 "is_configured": true, 00:16:38.346 "data_offset": 2048, 00:16:38.346 "data_size": 63488 00:16:38.346 }, 00:16:38.346 { 00:16:38.346 "name": null, 00:16:38.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.346 "is_configured": false, 00:16:38.346 "data_offset": 0, 00:16:38.346 "data_size": 63488 00:16:38.346 }, 00:16:38.346 { 00:16:38.346 "name": null, 00:16:38.346 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.346 "is_configured": false, 00:16:38.346 "data_offset": 2048, 00:16:38.346 "data_size": 63488 00:16:38.346 } 00:16:38.346 ] 00:16:38.346 }' 00:16:38.346 09:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.346 09:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.346 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:38.346 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.346 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.346 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.346 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.346 [2024-11-06 09:09:37.365708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.346 [2024-11-06 09:09:37.365793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.346 [2024-11-06 09:09:37.365816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:38.346 [2024-11-06 09:09:37.365831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.346 [2024-11-06 09:09:37.366321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.346 [2024-11-06 09:09:37.366346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.346 [2024-11-06 09:09:37.366432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:38.346 [2024-11-06 09:09:37.366459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.606 pt2 00:16:38.606 09:09:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.606 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:38.606 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.606 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.606 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.606 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.606 [2024-11-06 09:09:37.377702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.606 [2024-11-06 09:09:37.377898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.606 [2024-11-06 09:09:37.377924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:38.606 [2024-11-06 09:09:37.377938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.606 [2024-11-06 09:09:37.378357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.606 [2024-11-06 09:09:37.378384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.606 [2024-11-06 09:09:37.378452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:38.606 [2024-11-06 09:09:37.378475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.606 [2024-11-06 09:09:37.378591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.606 [2024-11-06 09:09:37.378605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.606 [2024-11-06 09:09:37.378861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:16:38.607 [2024-11-06 09:09:37.378991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.607 [2024-11-06 09:09:37.379000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:38.607 [2024-11-06 09:09:37.379134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.607 pt3 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.607 09:09:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.607 "name": "raid_bdev1", 00:16:38.607 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:38.607 "strip_size_kb": 64, 00:16:38.607 "state": "online", 00:16:38.607 "raid_level": "concat", 00:16:38.607 "superblock": true, 00:16:38.607 "num_base_bdevs": 3, 00:16:38.607 "num_base_bdevs_discovered": 3, 00:16:38.607 "num_base_bdevs_operational": 3, 00:16:38.607 "base_bdevs_list": [ 00:16:38.607 { 00:16:38.607 "name": "pt1", 00:16:38.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.607 "is_configured": true, 00:16:38.607 "data_offset": 2048, 00:16:38.607 "data_size": 63488 00:16:38.607 }, 00:16:38.607 { 00:16:38.607 "name": "pt2", 00:16:38.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.607 "is_configured": true, 00:16:38.607 "data_offset": 2048, 00:16:38.607 "data_size": 63488 00:16:38.607 }, 00:16:38.607 { 00:16:38.607 "name": "pt3", 00:16:38.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.607 "is_configured": true, 00:16:38.607 "data_offset": 2048, 00:16:38.607 "data_size": 63488 00:16:38.607 } 00:16:38.607 ] 00:16:38.607 }' 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.607 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.866 [2024-11-06 09:09:37.850015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.866 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.866 "name": "raid_bdev1", 00:16:38.866 "aliases": [ 00:16:38.866 "4e3ddc2b-9224-4995-acef-7287adbac9a6" 00:16:38.866 ], 00:16:38.866 "product_name": "Raid Volume", 00:16:38.866 "block_size": 512, 00:16:38.866 "num_blocks": 190464, 00:16:38.866 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:38.866 "assigned_rate_limits": { 00:16:38.866 "rw_ios_per_sec": 0, 00:16:38.866 "rw_mbytes_per_sec": 0, 00:16:38.866 "r_mbytes_per_sec": 0, 00:16:38.866 "w_mbytes_per_sec": 0 00:16:38.866 }, 00:16:38.866 "claimed": false, 00:16:38.866 "zoned": false, 00:16:38.866 "supported_io_types": { 00:16:38.866 "read": true, 00:16:38.866 "write": true, 00:16:38.866 "unmap": true, 00:16:38.866 "flush": true, 00:16:38.866 "reset": true, 00:16:38.866 "nvme_admin": false, 00:16:38.866 "nvme_io": false, 
00:16:38.866 "nvme_io_md": false, 00:16:38.866 "write_zeroes": true, 00:16:38.866 "zcopy": false, 00:16:38.866 "get_zone_info": false, 00:16:38.866 "zone_management": false, 00:16:38.866 "zone_append": false, 00:16:38.866 "compare": false, 00:16:38.866 "compare_and_write": false, 00:16:38.866 "abort": false, 00:16:38.866 "seek_hole": false, 00:16:38.866 "seek_data": false, 00:16:38.866 "copy": false, 00:16:38.866 "nvme_iov_md": false 00:16:38.866 }, 00:16:38.866 "memory_domains": [ 00:16:38.866 { 00:16:38.866 "dma_device_id": "system", 00:16:38.866 "dma_device_type": 1 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.866 "dma_device_type": 2 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "dma_device_id": "system", 00:16:38.866 "dma_device_type": 1 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.866 "dma_device_type": 2 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "dma_device_id": "system", 00:16:38.866 "dma_device_type": 1 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.866 "dma_device_type": 2 00:16:38.866 } 00:16:38.866 ], 00:16:38.866 "driver_specific": { 00:16:38.866 "raid": { 00:16:38.866 "uuid": "4e3ddc2b-9224-4995-acef-7287adbac9a6", 00:16:38.866 "strip_size_kb": 64, 00:16:38.866 "state": "online", 00:16:38.866 "raid_level": "concat", 00:16:38.866 "superblock": true, 00:16:38.866 "num_base_bdevs": 3, 00:16:38.866 "num_base_bdevs_discovered": 3, 00:16:38.866 "num_base_bdevs_operational": 3, 00:16:38.866 "base_bdevs_list": [ 00:16:38.866 { 00:16:38.866 "name": "pt1", 00:16:38.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.866 "is_configured": true, 00:16:38.866 "data_offset": 2048, 00:16:38.866 "data_size": 63488 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "name": "pt2", 00:16:38.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.866 "is_configured": true, 00:16:38.866 "data_offset": 2048, 00:16:38.866 
"data_size": 63488 00:16:38.866 }, 00:16:38.866 { 00:16:38.866 "name": "pt3", 00:16:38.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.866 "is_configured": true, 00:16:38.866 "data_offset": 2048, 00:16:38.866 "data_size": 63488 00:16:38.866 } 00:16:38.866 ] 00:16:38.866 } 00:16:38.866 } 00:16:38.866 }' 00:16:38.867 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.125 pt2 00:16:39.125 pt3' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.125 09:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.125 [2024-11-06 09:09:38.093976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4e3ddc2b-9224-4995-acef-7287adbac9a6 '!=' 4e3ddc2b-9224-4995-acef-7287adbac9a6 ']' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66642 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66642 ']' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66642 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:39.125 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66642 00:16:39.383 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:39.383 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:39.383 killing process with pid 66642 00:16:39.383 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66642' 00:16:39.383 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66642 00:16:39.383 09:09:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66642 00:16:39.383 
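The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a string that `jq` builds by joining `[.block_size, .md_size, .md_interleave, .dif_type]` with spaces; when the metadata fields are null they join as empty strings, leaving `512` plus three trailing spaces. A minimal sketch of that comparison (variable names here are illustrative, not taken from the test script):

```shell
#!/usr/bin/env bash
# Sketch: jq's join(" ") turns [512, null, null, null] into "512   ",
# i.e. the block size followed by three spaces for the empty md fields.
block_size=512
md_size=""          # null in the bdev JSON -> empty string after join
md_interleave=""
dif_type=""
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"
# The raid bdev and every base bdev must yield the same joined string.
[[ $cmp_base_bdev == "512   " ]] && echo "match"   # prints "match"
```

This is why the test passes only when every base bdev reports the same block size and the same (absent) metadata layout as the raid volume itself.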
[2024-11-06 09:09:38.169330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.383 [2024-11-06 09:09:38.169436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.383 [2024-11-06 09:09:38.169496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.383 [2024-11-06 09:09:38.169510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:39.641 [2024-11-06 09:09:38.482947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.017 09:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:41.017 00:16:41.017 real 0m5.336s 00:16:41.017 user 0m7.631s 00:16:41.017 sys 0m1.030s 00:16:41.017 09:09:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:41.017 ************************************ 00:16:41.017 END TEST raid_superblock_test 00:16:41.017 ************************************ 00:16:41.017 09:09:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.017 09:09:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:41.017 09:09:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:41.017 09:09:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:41.017 09:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.017 ************************************ 00:16:41.017 START TEST raid_read_error_test 00:16:41.017 ************************************ 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:41.017 09:09:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DqXV0OPgfx 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66895 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66895 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 66895 ']' 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:41.017 09:09:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.017 [2024-11-06 09:09:39.834054] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:16:41.017 [2024-11-06 09:09:39.834188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66895 ] 00:16:41.017 [2024-11-06 09:09:40.002293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.275 [2024-11-06 09:09:40.135974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.534 [2024-11-06 09:09:40.360386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.534 [2024-11-06 09:09:40.360425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.793 BaseBdev1_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.793 true 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
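The `bdev_malloc_create 32 512` calls above create 32 MiB base bdevs with 512-byte blocks; the `data_size: 63488` and the raid volume's `blockcnt 190464` seen in the dumps follow from the 2048-block superblock reservation on each base bdev. A minimal arithmetic sketch of those sizes (variable names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: sizes behind the 3-disk concat raid volume in this test.
malloc_mib=32
blocklen=512
base_blockcnt=$(( malloc_mib * 1024 * 1024 / blocklen ))  # 65536 blocks
data_offset=2048            # blocks reserved for the raid superblock
data_size=$(( base_blockcnt - data_offset ))              # 63488
num_base_bdevs=3
raid_blockcnt=$(( num_base_bdevs * data_size ))           # 190464
echo "$data_size $raid_blockcnt"   # prints "63488 190464"
```

For concat the usable capacities simply add up, which is why the volume registers with `blockcnt 190464, blocklen 512` in the `raid_bdev_configure_cont` debug lines.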
00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.793 [2024-11-06 09:09:40.764985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:41.793 [2024-11-06 09:09:40.765050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.793 [2024-11-06 09:09:40.765073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:41.793 [2024-11-06 09:09:40.765087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.793 [2024-11-06 09:09:40.767650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.793 [2024-11-06 09:09:40.767696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:41.793 BaseBdev1 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.793 BaseBdev2_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.793 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.052 true 00:16:42.052 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.052 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:42.052 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.052 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 [2024-11-06 09:09:40.837459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:42.053 [2024-11-06 09:09:40.837703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.053 [2024-11-06 09:09:40.837738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:42.053 [2024-11-06 09:09:40.837756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.053 [2024-11-06 09:09:40.840747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.053 [2024-11-06 09:09:40.840941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.053 BaseBdev2 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 BaseBdev3_malloc 00:16:42.053 09:09:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 true 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 [2024-11-06 09:09:40.917555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:42.053 [2024-11-06 09:09:40.917643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.053 [2024-11-06 09:09:40.917678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:42.053 [2024-11-06 09:09:40.917702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.053 [2024-11-06 09:09:40.920564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.053 [2024-11-06 09:09:40.920616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:42.053 BaseBdev3 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 [2024-11-06 09:09:40.929636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.053 [2024-11-06 09:09:40.932076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.053 [2024-11-06 09:09:40.932179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.053 [2024-11-06 09:09:40.932468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:42.053 [2024-11-06 09:09:40.932484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.053 [2024-11-06 09:09:40.932799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:42.053 [2024-11-06 09:09:40.933095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:42.053 [2024-11-06 09:09:40.933118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:42.053 [2024-11-06 09:09:40.933307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.053 09:09:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.053 "name": "raid_bdev1", 00:16:42.053 "uuid": "e2befc05-d4ab-42b4-af9f-d7dd4b5262d6", 00:16:42.053 "strip_size_kb": 64, 00:16:42.053 "state": "online", 00:16:42.053 "raid_level": "concat", 00:16:42.053 "superblock": true, 00:16:42.053 "num_base_bdevs": 3, 00:16:42.053 "num_base_bdevs_discovered": 3, 00:16:42.053 "num_base_bdevs_operational": 3, 00:16:42.053 "base_bdevs_list": [ 00:16:42.053 { 00:16:42.053 "name": "BaseBdev1", 00:16:42.053 "uuid": "f2fdfcb9-ff34-5691-a698-8ab532b80beb", 00:16:42.053 "is_configured": true, 00:16:42.053 "data_offset": 2048, 00:16:42.053 "data_size": 63488 00:16:42.053 }, 00:16:42.053 { 00:16:42.053 "name": "BaseBdev2", 00:16:42.053 "uuid": "f88246d2-3749-5d21-95b7-af68ef852618", 00:16:42.053 "is_configured": true, 00:16:42.053 "data_offset": 2048, 00:16:42.053 "data_size": 63488 
00:16:42.053 }, 00:16:42.053 { 00:16:42.053 "name": "BaseBdev3", 00:16:42.053 "uuid": "8dcdcb03-b954-5c00-ac92-392c03d1d7ba", 00:16:42.053 "is_configured": true, 00:16:42.053 "data_offset": 2048, 00:16:42.053 "data_size": 63488 00:16:42.053 } 00:16:42.053 ] 00:16:42.053 }' 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.053 09:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.634 09:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:42.634 09:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:42.634 [2024-11-06 09:09:41.482364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.570 "name": "raid_bdev1", 00:16:43.570 "uuid": "e2befc05-d4ab-42b4-af9f-d7dd4b5262d6", 00:16:43.570 "strip_size_kb": 64, 00:16:43.570 "state": "online", 00:16:43.570 "raid_level": "concat", 00:16:43.570 "superblock": true, 00:16:43.570 "num_base_bdevs": 3, 00:16:43.570 "num_base_bdevs_discovered": 3, 00:16:43.570 "num_base_bdevs_operational": 3, 00:16:43.570 "base_bdevs_list": [ 00:16:43.570 { 00:16:43.570 "name": "BaseBdev1", 00:16:43.570 "uuid": "f2fdfcb9-ff34-5691-a698-8ab532b80beb", 00:16:43.570 "is_configured": true, 00:16:43.570 "data_offset": 2048, 00:16:43.570 "data_size": 63488 
00:16:43.570 }, 00:16:43.570 { 00:16:43.570 "name": "BaseBdev2", 00:16:43.570 "uuid": "f88246d2-3749-5d21-95b7-af68ef852618", 00:16:43.570 "is_configured": true, 00:16:43.570 "data_offset": 2048, 00:16:43.570 "data_size": 63488 00:16:43.570 }, 00:16:43.570 { 00:16:43.570 "name": "BaseBdev3", 00:16:43.570 "uuid": "8dcdcb03-b954-5c00-ac92-392c03d1d7ba", 00:16:43.570 "is_configured": true, 00:16:43.570 "data_offset": 2048, 00:16:43.570 "data_size": 63488 00:16:43.570 } 00:16:43.570 ] 00:16:43.570 }' 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.570 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.829 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.830 [2024-11-06 09:09:42.801030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.830 [2024-11-06 09:09:42.801218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.830 [2024-11-06 09:09:42.804064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.830 [2024-11-06 09:09:42.804105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.830 [2024-11-06 09:09:42.804144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.830 [2024-11-06 09:09:42.804158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:43.830 { 00:16:43.830 "results": [ 00:16:43.830 { 00:16:43.830 "job": "raid_bdev1", 00:16:43.830 "core_mask": "0x1", 00:16:43.830 "workload": "randrw", 00:16:43.830 "percentage": 50, 
00:16:43.830 "status": "finished", 00:16:43.830 "queue_depth": 1, 00:16:43.830 "io_size": 131072, 00:16:43.830 "runtime": 1.318829, 00:16:43.830 "iops": 15829.952177272413, 00:16:43.830 "mibps": 1978.7440221590516, 00:16:43.830 "io_failed": 1, 00:16:43.830 "io_timeout": 0, 00:16:43.830 "avg_latency_us": 87.48365824635837, 00:16:43.830 "min_latency_us": 26.936546184738955, 00:16:43.830 "max_latency_us": 1408.1028112449799 00:16:43.830 } 00:16:43.830 ], 00:16:43.830 "core_count": 1 00:16:43.830 } 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66895 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 66895 ']' 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 66895 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66895 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:43.830 killing process with pid 66895 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66895' 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 66895 00:16:43.830 [2024-11-06 09:09:42.857018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.830 09:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 66895 00:16:44.089 [2024-11-06 
09:09:43.092003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DqXV0OPgfx 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:16:45.476 00:16:45.476 real 0m4.567s 00:16:45.476 user 0m5.381s 00:16:45.476 sys 0m0.656s 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.476 09:09:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.476 ************************************ 00:16:45.476 END TEST raid_read_error_test 00:16:45.476 ************************************ 00:16:45.476 09:09:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:45.476 09:09:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:45.476 09:09:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:45.476 09:09:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.476 ************************************ 00:16:45.476 START TEST raid_write_error_test 00:16:45.476 ************************************ 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:16:45.476 09:09:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:45.476 09:09:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SES8tTxved 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67035 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67035 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67035 ']' 00:16:45.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.476 09:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.476 [2024-11-06 09:09:44.473814] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:16:45.476 [2024-11-06 09:09:44.473932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67035 ] 00:16:45.736 [2024-11-06 09:09:44.653793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.995 [2024-11-06 09:09:44.778965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.995 [2024-11-06 09:09:44.997124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.995 [2024-11-06 09:09:44.997395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 BaseBdev1_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 true 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 [2024-11-06 09:09:45.385165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.564 [2024-11-06 09:09:45.385388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.564 [2024-11-06 09:09:45.385424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:46.564 [2024-11-06 09:09:45.385441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.564 [2024-11-06 09:09:45.388057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.564 [2024-11-06 09:09:45.388104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.564 BaseBdev1 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.564 BaseBdev2_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 true 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 [2024-11-06 09:09:45.455988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:46.564 [2024-11-06 09:09:45.456209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.564 [2024-11-06 09:09:45.456243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:46.564 [2024-11-06 09:09:45.456261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.564 [2024-11-06 09:09:45.459068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.564 [2024-11-06 09:09:45.459118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:46.564 BaseBdev2 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.564 09:09:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 BaseBdev3_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 true 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.564 [2024-11-06 09:09:45.537754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:46.564 [2024-11-06 09:09:45.537820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.564 [2024-11-06 09:09:45.537843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:46.564 [2024-11-06 09:09:45.537859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.564 [2024-11-06 09:09:45.540499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.564 [2024-11-06 09:09:45.540689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:16:46.564 BaseBdev3 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.564 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.565 [2024-11-06 09:09:45.549827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.565 [2024-11-06 09:09:45.552110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.565 [2024-11-06 09:09:45.552211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.565 [2024-11-06 09:09:45.552448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:46.565 [2024-11-06 09:09:45.552473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.565 [2024-11-06 09:09:45.552786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:46.565 [2024-11-06 09:09:45.552987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:46.565 [2024-11-06 09:09:45.553008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:46.565 [2024-11-06 09:09:45.553177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.565 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.824 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.824 "name": "raid_bdev1", 00:16:46.824 "uuid": "1f37d62b-fabc-4e85-b0bf-b10f3b958c44", 00:16:46.824 "strip_size_kb": 64, 00:16:46.824 "state": "online", 00:16:46.824 "raid_level": "concat", 00:16:46.824 "superblock": true, 00:16:46.824 "num_base_bdevs": 3, 00:16:46.824 "num_base_bdevs_discovered": 3, 00:16:46.824 "num_base_bdevs_operational": 3, 00:16:46.824 "base_bdevs_list": [ 00:16:46.824 { 00:16:46.824 
"name": "BaseBdev1", 00:16:46.824 "uuid": "26195b7b-5a58-5800-b46f-a8ef894a7a1d", 00:16:46.824 "is_configured": true, 00:16:46.824 "data_offset": 2048, 00:16:46.824 "data_size": 63488 00:16:46.824 }, 00:16:46.824 { 00:16:46.824 "name": "BaseBdev2", 00:16:46.824 "uuid": "5f512fd0-aa6c-594e-a1e1-19f89a334f27", 00:16:46.824 "is_configured": true, 00:16:46.824 "data_offset": 2048, 00:16:46.824 "data_size": 63488 00:16:46.824 }, 00:16:46.824 { 00:16:46.824 "name": "BaseBdev3", 00:16:46.824 "uuid": "2dff0d00-024f-59eb-b9fc-84a9d87ea7d1", 00:16:46.824 "is_configured": true, 00:16:46.824 "data_offset": 2048, 00:16:46.824 "data_size": 63488 00:16:46.824 } 00:16:46.824 ] 00:16:46.824 }' 00:16:46.824 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.824 09:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.083 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:47.083 09:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:47.083 [2024-11-06 09:09:46.095335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.020 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.278 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.278 "name": "raid_bdev1", 00:16:48.278 "uuid": "1f37d62b-fabc-4e85-b0bf-b10f3b958c44", 00:16:48.278 "strip_size_kb": 64, 00:16:48.278 "state": "online", 
00:16:48.278 "raid_level": "concat", 00:16:48.278 "superblock": true, 00:16:48.278 "num_base_bdevs": 3, 00:16:48.278 "num_base_bdevs_discovered": 3, 00:16:48.278 "num_base_bdevs_operational": 3, 00:16:48.278 "base_bdevs_list": [ 00:16:48.278 { 00:16:48.278 "name": "BaseBdev1", 00:16:48.278 "uuid": "26195b7b-5a58-5800-b46f-a8ef894a7a1d", 00:16:48.278 "is_configured": true, 00:16:48.278 "data_offset": 2048, 00:16:48.278 "data_size": 63488 00:16:48.278 }, 00:16:48.278 { 00:16:48.278 "name": "BaseBdev2", 00:16:48.278 "uuid": "5f512fd0-aa6c-594e-a1e1-19f89a334f27", 00:16:48.278 "is_configured": true, 00:16:48.278 "data_offset": 2048, 00:16:48.279 "data_size": 63488 00:16:48.279 }, 00:16:48.279 { 00:16:48.279 "name": "BaseBdev3", 00:16:48.279 "uuid": "2dff0d00-024f-59eb-b9fc-84a9d87ea7d1", 00:16:48.279 "is_configured": true, 00:16:48.279 "data_offset": 2048, 00:16:48.279 "data_size": 63488 00:16:48.279 } 00:16:48.279 ] 00:16:48.279 }' 00:16:48.279 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.279 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.537 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.537 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.537 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.538 [2024-11-06 09:09:47.457054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.538 [2024-11-06 09:09:47.457088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.538 [2024-11-06 09:09:47.460043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.538 [2024-11-06 09:09:47.460238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.538 [2024-11-06 09:09:47.460402] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:48.538 [2024-11-06 09:09:47.460584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:16:48.538 {
00:16:48.538 "results": [
00:16:48.538 {
00:16:48.538 "job": "raid_bdev1",
00:16:48.538 "core_mask": "0x1",
00:16:48.538 "workload": "randrw",
00:16:48.538 "percentage": 50,
00:16:48.538 "status": "finished",
00:16:48.538 "queue_depth": 1,
00:16:48.538 "io_size": 131072,
00:16:48.538 "runtime": 1.361148,
00:16:48.538 "iops": 14361.40669493692,
00:16:48.538 "mibps": 1795.175836867115,
00:16:48.538 "io_failed": 1,
00:16:48.538 "io_timeout": 0,
00:16:48.538 "avg_latency_us": 96.31806456477092,
00:16:48.538 "min_latency_us": 27.142168674698794,
00:16:48.538 "max_latency_us": 1750.2586345381526
00:16:48.538 }
00:16:48.538 ],
00:16:48.538 "core_count": 1
00:16:48.538 }
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67035
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67035 ']'
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67035
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67035
00:16:48.538 killing process with pid 67035
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67035'
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67035
00:16:48.538 [2024-11-06 09:09:47.510071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:48.538 09:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67035
00:16:48.797 [2024-11-06 09:09:47.753672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SES8tTxved
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:16:50.177
00:16:50.177 real 0m4.614s
00:16:50.177 user 0m5.391s
00:16:50.177 sys 0m0.671s
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:50.177 09:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.177 ************************************
00:16:50.177 END TEST raid_write_error_test
00:16:50.177 ************************************
00:16:50.177 09:09:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:16:50.177 09:09:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test
raid_state_function_test raid1 3 false 00:16:50.177 09:09:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:50.177 09:09:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:50.177 09:09:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.177 ************************************ 00:16:50.177 START TEST raid_state_function_test 00:16:50.177 ************************************ 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67180 00:16:50.177 Process raid pid: 67180 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67180' 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67180 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67180 ']' 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:50.177 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.177 [2024-11-06 09:09:49.151245] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:16:50.177 [2024-11-06 09:09:49.151517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.437 [2024-11-06 09:09:49.325657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.437 [2024-11-06 09:09:49.445536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.695 [2024-11-06 09:09:49.663339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.695 [2024-11-06 09:09:49.663370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.954 [2024-11-06 09:09:49.984907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.954 [2024-11-06 09:09:49.984968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.954 [2024-11-06 09:09:49.984981] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.954 [2024-11-06 09:09:49.984993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.954 [2024-11-06 09:09:49.985001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.954 [2024-11-06 09:09:49.985014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.954 
09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.954 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.233 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.233 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.233 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.233 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.233 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.233 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.233 "name": "Existed_Raid", 00:16:51.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.233 "strip_size_kb": 0, 00:16:51.233 "state": "configuring", 00:16:51.233 "raid_level": "raid1", 00:16:51.233 "superblock": false, 00:16:51.233 "num_base_bdevs": 3, 00:16:51.233 "num_base_bdevs_discovered": 0, 00:16:51.233 "num_base_bdevs_operational": 3, 00:16:51.233 "base_bdevs_list": [ 00:16:51.233 { 00:16:51.233 "name": "BaseBdev1", 00:16:51.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.233 "is_configured": false, 00:16:51.233 "data_offset": 0, 00:16:51.233 "data_size": 0 00:16:51.233 }, 00:16:51.233 { 00:16:51.233 "name": "BaseBdev2", 00:16:51.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.233 "is_configured": false, 00:16:51.233 "data_offset": 0, 00:16:51.233 "data_size": 0 00:16:51.233 }, 00:16:51.233 { 00:16:51.233 "name": "BaseBdev3", 00:16:51.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.233 "is_configured": false, 00:16:51.233 "data_offset": 0, 00:16:51.233 "data_size": 0 00:16:51.233 } 00:16:51.233 ] 00:16:51.233 }' 00:16:51.233 09:09:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.233 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 [2024-11-06 09:09:50.428442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.492 [2024-11-06 09:09:50.428481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 [2024-11-06 09:09:50.436422] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.492 [2024-11-06 09:09:50.436471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.492 [2024-11-06 09:09:50.436482] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.492 [2024-11-06 09:09:50.436494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.492 [2024-11-06 09:09:50.436502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.492 [2024-11-06 09:09:50.436514] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 [2024-11-06 09:09:50.482085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.492 BaseBdev1 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.492 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.492 [ 00:16:51.492 { 00:16:51.492 "name": "BaseBdev1", 00:16:51.492 "aliases": [ 00:16:51.492 "590f00e0-a1a7-437b-9b0c-65e4e435d1d0" 00:16:51.492 ], 00:16:51.492 "product_name": "Malloc disk", 00:16:51.492 "block_size": 512, 00:16:51.493 "num_blocks": 65536, 00:16:51.493 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:51.493 "assigned_rate_limits": { 00:16:51.493 "rw_ios_per_sec": 0, 00:16:51.493 "rw_mbytes_per_sec": 0, 00:16:51.493 "r_mbytes_per_sec": 0, 00:16:51.493 "w_mbytes_per_sec": 0 00:16:51.493 }, 00:16:51.493 "claimed": true, 00:16:51.493 "claim_type": "exclusive_write", 00:16:51.493 "zoned": false, 00:16:51.493 "supported_io_types": { 00:16:51.493 "read": true, 00:16:51.493 "write": true, 00:16:51.493 "unmap": true, 00:16:51.493 "flush": true, 00:16:51.493 "reset": true, 00:16:51.493 "nvme_admin": false, 00:16:51.493 "nvme_io": false, 00:16:51.493 "nvme_io_md": false, 00:16:51.493 "write_zeroes": true, 00:16:51.493 "zcopy": true, 00:16:51.493 "get_zone_info": false, 00:16:51.493 "zone_management": false, 00:16:51.493 "zone_append": false, 00:16:51.493 "compare": false, 00:16:51.493 "compare_and_write": false, 00:16:51.493 "abort": true, 00:16:51.493 "seek_hole": false, 00:16:51.493 "seek_data": false, 00:16:51.493 "copy": true, 00:16:51.493 "nvme_iov_md": false 00:16:51.493 }, 00:16:51.493 "memory_domains": [ 00:16:51.493 { 00:16:51.493 "dma_device_id": "system", 00:16:51.493 "dma_device_type": 1 00:16:51.493 }, 00:16:51.493 { 00:16:51.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.493 "dma_device_type": 2 00:16:51.493 } 00:16:51.493 ], 00:16:51.493 "driver_specific": {} 00:16:51.493 } 00:16:51.493 ] 00:16:51.493 09:09:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.493 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:16:51.751 "name": "Existed_Raid", 00:16:51.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.751 "strip_size_kb": 0, 00:16:51.751 "state": "configuring", 00:16:51.751 "raid_level": "raid1", 00:16:51.751 "superblock": false, 00:16:51.751 "num_base_bdevs": 3, 00:16:51.751 "num_base_bdevs_discovered": 1, 00:16:51.751 "num_base_bdevs_operational": 3, 00:16:51.751 "base_bdevs_list": [ 00:16:51.751 { 00:16:51.751 "name": "BaseBdev1", 00:16:51.751 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:51.751 "is_configured": true, 00:16:51.751 "data_offset": 0, 00:16:51.751 "data_size": 65536 00:16:51.751 }, 00:16:51.751 { 00:16:51.751 "name": "BaseBdev2", 00:16:51.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.751 "is_configured": false, 00:16:51.751 "data_offset": 0, 00:16:51.751 "data_size": 0 00:16:51.751 }, 00:16:51.751 { 00:16:51.751 "name": "BaseBdev3", 00:16:51.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.751 "is_configured": false, 00:16:51.751 "data_offset": 0, 00:16:51.751 "data_size": 0 00:16:51.751 } 00:16:51.751 ] 00:16:51.751 }' 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.751 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.009 [2024-11-06 09:09:50.925713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.009 [2024-11-06 09:09:50.925922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.009 [2024-11-06 09:09:50.933761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.009 [2024-11-06 09:09:50.935861] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.009 [2024-11-06 09:09:50.935904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.009 [2024-11-06 09:09:50.935915] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.009 [2024-11-06 09:09:50.935928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.009 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.010 "name": "Existed_Raid", 00:16:52.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.010 "strip_size_kb": 0, 00:16:52.010 "state": "configuring", 00:16:52.010 "raid_level": "raid1", 00:16:52.010 "superblock": false, 00:16:52.010 "num_base_bdevs": 3, 00:16:52.010 "num_base_bdevs_discovered": 1, 00:16:52.010 "num_base_bdevs_operational": 3, 00:16:52.010 "base_bdevs_list": [ 00:16:52.010 { 00:16:52.010 "name": "BaseBdev1", 00:16:52.010 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:52.010 "is_configured": true, 00:16:52.010 "data_offset": 0, 00:16:52.010 "data_size": 65536 00:16:52.010 }, 00:16:52.010 { 00:16:52.010 "name": "BaseBdev2", 00:16:52.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.010 
"is_configured": false, 00:16:52.010 "data_offset": 0, 00:16:52.010 "data_size": 0 00:16:52.010 }, 00:16:52.010 { 00:16:52.010 "name": "BaseBdev3", 00:16:52.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.010 "is_configured": false, 00:16:52.010 "data_offset": 0, 00:16:52.010 "data_size": 0 00:16:52.010 } 00:16:52.010 ] 00:16:52.010 }' 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.010 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 [2024-11-06 09:09:51.401971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.578 BaseBdev2 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:52.578 09:09:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 [ 00:16:52.578 { 00:16:52.578 "name": "BaseBdev2", 00:16:52.578 "aliases": [ 00:16:52.578 "21b210bb-2585-4fe4-9a7f-8f3147ff9037" 00:16:52.578 ], 00:16:52.578 "product_name": "Malloc disk", 00:16:52.578 "block_size": 512, 00:16:52.578 "num_blocks": 65536, 00:16:52.578 "uuid": "21b210bb-2585-4fe4-9a7f-8f3147ff9037", 00:16:52.578 "assigned_rate_limits": { 00:16:52.578 "rw_ios_per_sec": 0, 00:16:52.578 "rw_mbytes_per_sec": 0, 00:16:52.578 "r_mbytes_per_sec": 0, 00:16:52.578 "w_mbytes_per_sec": 0 00:16:52.578 }, 00:16:52.578 "claimed": true, 00:16:52.578 "claim_type": "exclusive_write", 00:16:52.578 "zoned": false, 00:16:52.578 "supported_io_types": { 00:16:52.578 "read": true, 00:16:52.578 "write": true, 00:16:52.578 "unmap": true, 00:16:52.578 "flush": true, 00:16:52.578 "reset": true, 00:16:52.578 "nvme_admin": false, 00:16:52.578 "nvme_io": false, 00:16:52.578 "nvme_io_md": false, 00:16:52.578 "write_zeroes": true, 00:16:52.578 "zcopy": true, 00:16:52.578 "get_zone_info": false, 00:16:52.578 "zone_management": false, 00:16:52.578 "zone_append": false, 00:16:52.578 "compare": false, 00:16:52.578 "compare_and_write": false, 00:16:52.578 "abort": true, 00:16:52.578 "seek_hole": false, 00:16:52.578 "seek_data": false, 00:16:52.578 "copy": true, 00:16:52.578 "nvme_iov_md": false 00:16:52.578 }, 00:16:52.578 
"memory_domains": [ 00:16:52.578 { 00:16:52.578 "dma_device_id": "system", 00:16:52.578 "dma_device_type": 1 00:16:52.578 }, 00:16:52.578 { 00:16:52.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.578 "dma_device_type": 2 00:16:52.578 } 00:16:52.578 ], 00:16:52.578 "driver_specific": {} 00:16:52.578 } 00:16:52.578 ] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.578 "name": "Existed_Raid", 00:16:52.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.578 "strip_size_kb": 0, 00:16:52.578 "state": "configuring", 00:16:52.578 "raid_level": "raid1", 00:16:52.578 "superblock": false, 00:16:52.578 "num_base_bdevs": 3, 00:16:52.578 "num_base_bdevs_discovered": 2, 00:16:52.578 "num_base_bdevs_operational": 3, 00:16:52.578 "base_bdevs_list": [ 00:16:52.578 { 00:16:52.578 "name": "BaseBdev1", 00:16:52.578 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:52.578 "is_configured": true, 00:16:52.578 "data_offset": 0, 00:16:52.578 "data_size": 65536 00:16:52.578 }, 00:16:52.578 { 00:16:52.578 "name": "BaseBdev2", 00:16:52.578 "uuid": "21b210bb-2585-4fe4-9a7f-8f3147ff9037", 00:16:52.578 "is_configured": true, 00:16:52.578 "data_offset": 0, 00:16:52.578 "data_size": 65536 00:16:52.578 }, 00:16:52.578 { 00:16:52.578 "name": "BaseBdev3", 00:16:52.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.578 "is_configured": false, 00:16:52.578 "data_offset": 0, 00:16:52.578 "data_size": 0 00:16:52.578 } 00:16:52.578 ] 00:16:52.578 }' 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.578 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.145 [2024-11-06 09:09:51.929353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.145 [2024-11-06 09:09:51.929401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:53.145 [2024-11-06 09:09:51.929415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:53.145 [2024-11-06 09:09:51.929902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:53.145 [2024-11-06 09:09:51.930092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.145 [2024-11-06 09:09:51.930104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:53.145 [2024-11-06 09:09:51.930389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.145 BaseBdev3 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.145 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.145 [ 00:16:53.145 { 00:16:53.145 "name": "BaseBdev3", 00:16:53.145 "aliases": [ 00:16:53.145 "21d7c8b9-4a4b-4a93-9e41-2b0a1d1b4d0d" 00:16:53.145 ], 00:16:53.145 "product_name": "Malloc disk", 00:16:53.146 "block_size": 512, 00:16:53.146 "num_blocks": 65536, 00:16:53.146 "uuid": "21d7c8b9-4a4b-4a93-9e41-2b0a1d1b4d0d", 00:16:53.146 "assigned_rate_limits": { 00:16:53.146 "rw_ios_per_sec": 0, 00:16:53.146 "rw_mbytes_per_sec": 0, 00:16:53.146 "r_mbytes_per_sec": 0, 00:16:53.146 "w_mbytes_per_sec": 0 00:16:53.146 }, 00:16:53.146 "claimed": true, 00:16:53.146 "claim_type": "exclusive_write", 00:16:53.146 "zoned": false, 00:16:53.146 "supported_io_types": { 00:16:53.146 "read": true, 00:16:53.146 "write": true, 00:16:53.146 "unmap": true, 00:16:53.146 "flush": true, 00:16:53.146 "reset": true, 00:16:53.146 "nvme_admin": false, 00:16:53.146 "nvme_io": false, 00:16:53.146 "nvme_io_md": false, 00:16:53.146 "write_zeroes": true, 00:16:53.146 "zcopy": true, 00:16:53.146 "get_zone_info": false, 00:16:53.146 "zone_management": false, 00:16:53.146 "zone_append": false, 00:16:53.146 "compare": false, 00:16:53.146 "compare_and_write": false, 00:16:53.146 "abort": true, 00:16:53.146 "seek_hole": false, 00:16:53.146 "seek_data": false, 00:16:53.146 
"copy": true, 00:16:53.146 "nvme_iov_md": false 00:16:53.146 }, 00:16:53.146 "memory_domains": [ 00:16:53.146 { 00:16:53.146 "dma_device_id": "system", 00:16:53.146 "dma_device_type": 1 00:16:53.146 }, 00:16:53.146 { 00:16:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.146 "dma_device_type": 2 00:16:53.146 } 00:16:53.146 ], 00:16:53.146 "driver_specific": {} 00:16:53.146 } 00:16:53.146 ] 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.146 09:09:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.146 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.146 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.146 "name": "Existed_Raid", 00:16:53.146 "uuid": "8f6f96b6-7fe0-4b6a-b050-d113db64d2c4", 00:16:53.146 "strip_size_kb": 0, 00:16:53.146 "state": "online", 00:16:53.146 "raid_level": "raid1", 00:16:53.146 "superblock": false, 00:16:53.146 "num_base_bdevs": 3, 00:16:53.146 "num_base_bdevs_discovered": 3, 00:16:53.146 "num_base_bdevs_operational": 3, 00:16:53.146 "base_bdevs_list": [ 00:16:53.146 { 00:16:53.146 "name": "BaseBdev1", 00:16:53.146 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:53.146 "is_configured": true, 00:16:53.146 "data_offset": 0, 00:16:53.146 "data_size": 65536 00:16:53.146 }, 00:16:53.146 { 00:16:53.146 "name": "BaseBdev2", 00:16:53.146 "uuid": "21b210bb-2585-4fe4-9a7f-8f3147ff9037", 00:16:53.146 "is_configured": true, 00:16:53.146 "data_offset": 0, 00:16:53.146 "data_size": 65536 00:16:53.146 }, 00:16:53.146 { 00:16:53.146 "name": "BaseBdev3", 00:16:53.146 "uuid": "21d7c8b9-4a4b-4a93-9e41-2b0a1d1b4d0d", 00:16:53.146 "is_configured": true, 00:16:53.146 "data_offset": 0, 00:16:53.146 "data_size": 65536 00:16:53.146 } 00:16:53.146 ] 00:16:53.146 }' 00:16:53.146 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.146 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.405 09:09:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.405 [2024-11-06 09:09:52.357072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.405 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.405 "name": "Existed_Raid", 00:16:53.405 "aliases": [ 00:16:53.405 "8f6f96b6-7fe0-4b6a-b050-d113db64d2c4" 00:16:53.405 ], 00:16:53.405 "product_name": "Raid Volume", 00:16:53.405 "block_size": 512, 00:16:53.405 "num_blocks": 65536, 00:16:53.405 "uuid": "8f6f96b6-7fe0-4b6a-b050-d113db64d2c4", 00:16:53.405 "assigned_rate_limits": { 00:16:53.405 "rw_ios_per_sec": 0, 00:16:53.405 "rw_mbytes_per_sec": 0, 00:16:53.405 "r_mbytes_per_sec": 0, 00:16:53.405 "w_mbytes_per_sec": 0 00:16:53.405 }, 00:16:53.405 "claimed": false, 00:16:53.405 "zoned": false, 
00:16:53.405 "supported_io_types": { 00:16:53.405 "read": true, 00:16:53.405 "write": true, 00:16:53.405 "unmap": false, 00:16:53.405 "flush": false, 00:16:53.405 "reset": true, 00:16:53.405 "nvme_admin": false, 00:16:53.405 "nvme_io": false, 00:16:53.405 "nvme_io_md": false, 00:16:53.405 "write_zeroes": true, 00:16:53.405 "zcopy": false, 00:16:53.405 "get_zone_info": false, 00:16:53.405 "zone_management": false, 00:16:53.405 "zone_append": false, 00:16:53.405 "compare": false, 00:16:53.405 "compare_and_write": false, 00:16:53.405 "abort": false, 00:16:53.405 "seek_hole": false, 00:16:53.405 "seek_data": false, 00:16:53.405 "copy": false, 00:16:53.405 "nvme_iov_md": false 00:16:53.405 }, 00:16:53.405 "memory_domains": [ 00:16:53.405 { 00:16:53.405 "dma_device_id": "system", 00:16:53.405 "dma_device_type": 1 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.405 "dma_device_type": 2 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "dma_device_id": "system", 00:16:53.405 "dma_device_type": 1 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.405 "dma_device_type": 2 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "dma_device_id": "system", 00:16:53.405 "dma_device_type": 1 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.405 "dma_device_type": 2 00:16:53.405 } 00:16:53.405 ], 00:16:53.405 "driver_specific": { 00:16:53.405 "raid": { 00:16:53.405 "uuid": "8f6f96b6-7fe0-4b6a-b050-d113db64d2c4", 00:16:53.405 "strip_size_kb": 0, 00:16:53.405 "state": "online", 00:16:53.405 "raid_level": "raid1", 00:16:53.405 "superblock": false, 00:16:53.405 "num_base_bdevs": 3, 00:16:53.405 "num_base_bdevs_discovered": 3, 00:16:53.405 "num_base_bdevs_operational": 3, 00:16:53.405 "base_bdevs_list": [ 00:16:53.405 { 00:16:53.405 "name": "BaseBdev1", 00:16:53.405 "uuid": "590f00e0-a1a7-437b-9b0c-65e4e435d1d0", 00:16:53.405 "is_configured": true, 00:16:53.405 
"data_offset": 0, 00:16:53.405 "data_size": 65536 00:16:53.405 }, 00:16:53.405 { 00:16:53.405 "name": "BaseBdev2", 00:16:53.406 "uuid": "21b210bb-2585-4fe4-9a7f-8f3147ff9037", 00:16:53.406 "is_configured": true, 00:16:53.406 "data_offset": 0, 00:16:53.406 "data_size": 65536 00:16:53.406 }, 00:16:53.406 { 00:16:53.406 "name": "BaseBdev3", 00:16:53.406 "uuid": "21d7c8b9-4a4b-4a93-9e41-2b0a1d1b4d0d", 00:16:53.406 "is_configured": true, 00:16:53.406 "data_offset": 0, 00:16:53.406 "data_size": 65536 00:16:53.406 } 00:16:53.406 ] 00:16:53.406 } 00:16:53.406 } 00:16:53.406 }' 00:16:53.406 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.406 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:53.406 BaseBdev2 00:16:53.406 BaseBdev3' 00:16:53.406 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 [2024-11-06 09:09:52.628461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.922 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.923 "name": "Existed_Raid", 00:16:53.923 "uuid": "8f6f96b6-7fe0-4b6a-b050-d113db64d2c4", 00:16:53.923 "strip_size_kb": 0, 00:16:53.923 "state": "online", 00:16:53.923 "raid_level": "raid1", 00:16:53.923 "superblock": false, 00:16:53.923 "num_base_bdevs": 3, 00:16:53.923 "num_base_bdevs_discovered": 2, 00:16:53.923 "num_base_bdevs_operational": 2, 00:16:53.923 "base_bdevs_list": [ 00:16:53.923 { 00:16:53.923 "name": null, 00:16:53.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.923 "is_configured": false, 00:16:53.923 "data_offset": 0, 00:16:53.923 "data_size": 65536 00:16:53.923 }, 00:16:53.923 { 00:16:53.923 "name": "BaseBdev2", 00:16:53.923 "uuid": "21b210bb-2585-4fe4-9a7f-8f3147ff9037", 00:16:53.923 "is_configured": true, 00:16:53.923 "data_offset": 0, 00:16:53.923 "data_size": 65536 00:16:53.923 }, 00:16:53.923 { 00:16:53.923 "name": "BaseBdev3", 00:16:53.923 "uuid": "21d7c8b9-4a4b-4a93-9e41-2b0a1d1b4d0d", 00:16:53.923 "is_configured": true, 00:16:53.923 "data_offset": 0, 00:16:53.923 "data_size": 65536 00:16:53.923 } 00:16:53.923 ] 
00:16:53.923 }' 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.923 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.181 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 [2024-11-06 09:09:53.133308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.439 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.439 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:54.439 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.439 09:09:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 [2024-11-06 09:09:53.285176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:54.440 [2024-11-06 09:09:53.285288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.440 [2024-11-06 09:09:53.382220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.440 [2024-11-06 09:09:53.382288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.440 [2024-11-06 09:09:53.382305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:54.440 09:09:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 BaseBdev2 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:54.440 
09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.440 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 [ 00:16:54.699 { 00:16:54.699 "name": "BaseBdev2", 00:16:54.699 "aliases": [ 00:16:54.699 "6923a520-9238-40f0-af19-1df25ccd1e6c" 00:16:54.699 ], 00:16:54.699 "product_name": "Malloc disk", 00:16:54.699 "block_size": 512, 00:16:54.699 "num_blocks": 65536, 00:16:54.699 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:54.699 "assigned_rate_limits": { 00:16:54.699 "rw_ios_per_sec": 0, 00:16:54.699 "rw_mbytes_per_sec": 0, 00:16:54.699 "r_mbytes_per_sec": 0, 00:16:54.699 "w_mbytes_per_sec": 0 00:16:54.699 }, 00:16:54.699 "claimed": false, 00:16:54.699 "zoned": false, 00:16:54.699 "supported_io_types": { 00:16:54.699 "read": true, 00:16:54.699 "write": true, 00:16:54.699 "unmap": true, 00:16:54.699 "flush": true, 00:16:54.699 "reset": true, 00:16:54.699 "nvme_admin": false, 00:16:54.699 "nvme_io": false, 00:16:54.699 "nvme_io_md": false, 00:16:54.699 "write_zeroes": true, 
00:16:54.699 "zcopy": true, 00:16:54.699 "get_zone_info": false, 00:16:54.699 "zone_management": false, 00:16:54.699 "zone_append": false, 00:16:54.699 "compare": false, 00:16:54.699 "compare_and_write": false, 00:16:54.699 "abort": true, 00:16:54.699 "seek_hole": false, 00:16:54.699 "seek_data": false, 00:16:54.699 "copy": true, 00:16:54.699 "nvme_iov_md": false 00:16:54.699 }, 00:16:54.699 "memory_domains": [ 00:16:54.699 { 00:16:54.699 "dma_device_id": "system", 00:16:54.699 "dma_device_type": 1 00:16:54.699 }, 00:16:54.699 { 00:16:54.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.699 "dma_device_type": 2 00:16:54.699 } 00:16:54.699 ], 00:16:54.699 "driver_specific": {} 00:16:54.699 } 00:16:54.699 ] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 BaseBdev3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:54.699 09:09:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 [ 00:16:54.699 { 00:16:54.699 "name": "BaseBdev3", 00:16:54.699 "aliases": [ 00:16:54.699 "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95" 00:16:54.699 ], 00:16:54.699 "product_name": "Malloc disk", 00:16:54.699 "block_size": 512, 00:16:54.699 "num_blocks": 65536, 00:16:54.699 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:54.699 "assigned_rate_limits": { 00:16:54.699 "rw_ios_per_sec": 0, 00:16:54.699 "rw_mbytes_per_sec": 0, 00:16:54.699 "r_mbytes_per_sec": 0, 00:16:54.699 "w_mbytes_per_sec": 0 00:16:54.699 }, 00:16:54.699 "claimed": false, 00:16:54.699 "zoned": false, 00:16:54.699 "supported_io_types": { 00:16:54.699 "read": true, 00:16:54.699 "write": true, 00:16:54.699 "unmap": true, 00:16:54.699 "flush": true, 00:16:54.699 "reset": true, 00:16:54.699 "nvme_admin": false, 00:16:54.699 "nvme_io": false, 00:16:54.699 "nvme_io_md": false, 00:16:54.699 "write_zeroes": true, 
00:16:54.699 "zcopy": true, 00:16:54.699 "get_zone_info": false, 00:16:54.699 "zone_management": false, 00:16:54.699 "zone_append": false, 00:16:54.699 "compare": false, 00:16:54.699 "compare_and_write": false, 00:16:54.699 "abort": true, 00:16:54.699 "seek_hole": false, 00:16:54.699 "seek_data": false, 00:16:54.699 "copy": true, 00:16:54.699 "nvme_iov_md": false 00:16:54.699 }, 00:16:54.699 "memory_domains": [ 00:16:54.699 { 00:16:54.699 "dma_device_id": "system", 00:16:54.699 "dma_device_type": 1 00:16:54.699 }, 00:16:54.699 { 00:16:54.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.699 "dma_device_type": 2 00:16:54.699 } 00:16:54.699 ], 00:16:54.699 "driver_specific": {} 00:16:54.699 } 00:16:54.699 ] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 [2024-11-06 09:09:53.606446] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.699 [2024-11-06 09:09:53.606500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.699 [2024-11-06 09:09:53.606539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.699 [2024-11-06 09:09:53.608661] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.699 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:16:54.700 "name": "Existed_Raid", 00:16:54.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.700 "strip_size_kb": 0, 00:16:54.700 "state": "configuring", 00:16:54.700 "raid_level": "raid1", 00:16:54.700 "superblock": false, 00:16:54.700 "num_base_bdevs": 3, 00:16:54.700 "num_base_bdevs_discovered": 2, 00:16:54.700 "num_base_bdevs_operational": 3, 00:16:54.700 "base_bdevs_list": [ 00:16:54.700 { 00:16:54.700 "name": "BaseBdev1", 00:16:54.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.700 "is_configured": false, 00:16:54.700 "data_offset": 0, 00:16:54.700 "data_size": 0 00:16:54.700 }, 00:16:54.700 { 00:16:54.700 "name": "BaseBdev2", 00:16:54.700 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:54.700 "is_configured": true, 00:16:54.700 "data_offset": 0, 00:16:54.700 "data_size": 65536 00:16:54.700 }, 00:16:54.700 { 00:16:54.700 "name": "BaseBdev3", 00:16:54.700 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:54.700 "is_configured": true, 00:16:54.700 "data_offset": 0, 00:16:54.700 "data_size": 65536 00:16:54.700 } 00:16:54.700 ] 00:16:54.700 }' 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.700 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 [2024-11-06 09:09:54.014129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.267 "name": "Existed_Raid", 00:16:55.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.267 "strip_size_kb": 0, 00:16:55.267 "state": "configuring", 00:16:55.267 "raid_level": "raid1", 00:16:55.267 "superblock": false, 00:16:55.267 "num_base_bdevs": 3, 
00:16:55.267 "num_base_bdevs_discovered": 1, 00:16:55.267 "num_base_bdevs_operational": 3, 00:16:55.267 "base_bdevs_list": [ 00:16:55.267 { 00:16:55.267 "name": "BaseBdev1", 00:16:55.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.267 "is_configured": false, 00:16:55.267 "data_offset": 0, 00:16:55.267 "data_size": 0 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "name": null, 00:16:55.267 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:55.267 "is_configured": false, 00:16:55.267 "data_offset": 0, 00:16:55.267 "data_size": 65536 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "name": "BaseBdev3", 00:16:55.267 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:55.267 "is_configured": true, 00:16:55.267 "data_offset": 0, 00:16:55.267 "data_size": 65536 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }' 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.267 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.525 09:09:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.525 [2024-11-06 09:09:54.471937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.525 BaseBdev1 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.525 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.526 [ 00:16:55.526 { 00:16:55.526 "name": "BaseBdev1", 00:16:55.526 "aliases": [ 00:16:55.526 "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84" 00:16:55.526 ], 00:16:55.526 "product_name": "Malloc disk", 
00:16:55.526 "block_size": 512, 00:16:55.526 "num_blocks": 65536, 00:16:55.526 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:55.526 "assigned_rate_limits": { 00:16:55.526 "rw_ios_per_sec": 0, 00:16:55.526 "rw_mbytes_per_sec": 0, 00:16:55.526 "r_mbytes_per_sec": 0, 00:16:55.526 "w_mbytes_per_sec": 0 00:16:55.526 }, 00:16:55.526 "claimed": true, 00:16:55.526 "claim_type": "exclusive_write", 00:16:55.526 "zoned": false, 00:16:55.526 "supported_io_types": { 00:16:55.526 "read": true, 00:16:55.526 "write": true, 00:16:55.526 "unmap": true, 00:16:55.526 "flush": true, 00:16:55.526 "reset": true, 00:16:55.526 "nvme_admin": false, 00:16:55.526 "nvme_io": false, 00:16:55.526 "nvme_io_md": false, 00:16:55.526 "write_zeroes": true, 00:16:55.526 "zcopy": true, 00:16:55.526 "get_zone_info": false, 00:16:55.526 "zone_management": false, 00:16:55.526 "zone_append": false, 00:16:55.526 "compare": false, 00:16:55.526 "compare_and_write": false, 00:16:55.526 "abort": true, 00:16:55.526 "seek_hole": false, 00:16:55.526 "seek_data": false, 00:16:55.526 "copy": true, 00:16:55.526 "nvme_iov_md": false 00:16:55.526 }, 00:16:55.526 "memory_domains": [ 00:16:55.526 { 00:16:55.526 "dma_device_id": "system", 00:16:55.526 "dma_device_type": 1 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.526 "dma_device_type": 2 00:16:55.526 } 00:16:55.526 ], 00:16:55.526 "driver_specific": {} 00:16:55.526 } 00:16:55.526 ] 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.526 "name": "Existed_Raid", 00:16:55.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.526 "strip_size_kb": 0, 00:16:55.526 "state": "configuring", 00:16:55.526 "raid_level": "raid1", 00:16:55.526 "superblock": false, 00:16:55.526 "num_base_bdevs": 3, 00:16:55.526 "num_base_bdevs_discovered": 2, 00:16:55.526 "num_base_bdevs_operational": 3, 00:16:55.526 "base_bdevs_list": [ 00:16:55.526 { 00:16:55.526 "name": "BaseBdev1", 00:16:55.526 "uuid": 
"8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:55.526 "is_configured": true, 00:16:55.526 "data_offset": 0, 00:16:55.526 "data_size": 65536 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "name": null, 00:16:55.526 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:55.526 "is_configured": false, 00:16:55.526 "data_offset": 0, 00:16:55.526 "data_size": 65536 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "name": "BaseBdev3", 00:16:55.526 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:55.526 "is_configured": true, 00:16:55.526 "data_offset": 0, 00:16:55.526 "data_size": 65536 00:16:55.526 } 00:16:55.526 ] 00:16:55.526 }' 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.526 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.093 [2024-11-06 09:09:54.971394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:56.093 09:09:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.093 09:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.093 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.093 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.093 "name": "Existed_Raid", 00:16:56.093 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:56.093 "strip_size_kb": 0, 00:16:56.093 "state": "configuring", 00:16:56.093 "raid_level": "raid1", 00:16:56.093 "superblock": false, 00:16:56.093 "num_base_bdevs": 3, 00:16:56.093 "num_base_bdevs_discovered": 1, 00:16:56.093 "num_base_bdevs_operational": 3, 00:16:56.093 "base_bdevs_list": [ 00:16:56.093 { 00:16:56.093 "name": "BaseBdev1", 00:16:56.093 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:56.093 "is_configured": true, 00:16:56.093 "data_offset": 0, 00:16:56.093 "data_size": 65536 00:16:56.093 }, 00:16:56.093 { 00:16:56.093 "name": null, 00:16:56.093 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:56.093 "is_configured": false, 00:16:56.093 "data_offset": 0, 00:16:56.093 "data_size": 65536 00:16:56.093 }, 00:16:56.093 { 00:16:56.093 "name": null, 00:16:56.093 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:56.093 "is_configured": false, 00:16:56.093 "data_offset": 0, 00:16:56.093 "data_size": 65536 00:16:56.093 } 00:16:56.093 ] 00:16:56.093 }' 00:16:56.093 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.093 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.663 [2024-11-06 09:09:55.450932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.663 "name": "Existed_Raid", 00:16:56.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.663 "strip_size_kb": 0, 00:16:56.663 "state": "configuring", 00:16:56.663 "raid_level": "raid1", 00:16:56.663 "superblock": false, 00:16:56.663 "num_base_bdevs": 3, 00:16:56.663 "num_base_bdevs_discovered": 2, 00:16:56.663 "num_base_bdevs_operational": 3, 00:16:56.663 "base_bdevs_list": [ 00:16:56.663 { 00:16:56.663 "name": "BaseBdev1", 00:16:56.663 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:56.663 "is_configured": true, 00:16:56.663 "data_offset": 0, 00:16:56.663 "data_size": 65536 00:16:56.663 }, 00:16:56.663 { 00:16:56.663 "name": null, 00:16:56.663 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:56.663 "is_configured": false, 00:16:56.663 "data_offset": 0, 00:16:56.663 "data_size": 65536 00:16:56.663 }, 00:16:56.663 { 00:16:56.663 "name": "BaseBdev3", 00:16:56.663 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:56.663 "is_configured": true, 00:16:56.663 "data_offset": 0, 00:16:56.663 "data_size": 65536 00:16:56.663 } 00:16:56.663 ] 00:16:56.663 }' 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.663 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.954 09:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.954 [2024-11-06 09:09:55.926449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.212 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.212 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:57.212 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.212 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.213 09:09:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.213 "name": "Existed_Raid", 00:16:57.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.213 "strip_size_kb": 0, 00:16:57.213 "state": "configuring", 00:16:57.213 "raid_level": "raid1", 00:16:57.213 "superblock": false, 00:16:57.213 "num_base_bdevs": 3, 00:16:57.213 "num_base_bdevs_discovered": 1, 00:16:57.213 "num_base_bdevs_operational": 3, 00:16:57.213 "base_bdevs_list": [ 00:16:57.213 { 00:16:57.213 "name": null, 00:16:57.213 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:57.213 "is_configured": false, 00:16:57.213 "data_offset": 0, 00:16:57.213 "data_size": 65536 00:16:57.213 }, 00:16:57.213 { 00:16:57.213 "name": null, 00:16:57.213 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:57.213 "is_configured": false, 00:16:57.213 "data_offset": 0, 00:16:57.213 "data_size": 65536 00:16:57.213 }, 00:16:57.213 { 00:16:57.213 "name": "BaseBdev3", 00:16:57.213 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:57.213 "is_configured": true, 00:16:57.213 "data_offset": 0, 00:16:57.213 "data_size": 65536 00:16:57.213 } 00:16:57.213 ] 00:16:57.213 }' 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.213 09:09:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.472 [2024-11-06 09:09:56.478208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.472 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.732 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.732 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.732 "name": "Existed_Raid", 00:16:57.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.732 "strip_size_kb": 0, 00:16:57.732 "state": "configuring", 00:16:57.732 "raid_level": "raid1", 00:16:57.732 "superblock": false, 00:16:57.732 "num_base_bdevs": 3, 00:16:57.732 "num_base_bdevs_discovered": 2, 00:16:57.732 "num_base_bdevs_operational": 3, 00:16:57.732 "base_bdevs_list": [ 00:16:57.732 { 00:16:57.732 "name": null, 00:16:57.732 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:57.732 "is_configured": false, 00:16:57.732 "data_offset": 0, 00:16:57.732 "data_size": 65536 00:16:57.732 }, 00:16:57.732 { 00:16:57.732 "name": "BaseBdev2", 00:16:57.732 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:57.732 "is_configured": true, 00:16:57.732 "data_offset": 0, 00:16:57.732 "data_size": 65536 00:16:57.732 }, 00:16:57.732 { 
00:16:57.732 "name": "BaseBdev3", 00:16:57.732 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:57.732 "is_configured": true, 00:16:57.732 "data_offset": 0, 00:16:57.732 "data_size": 65536 00:16:57.732 } 00:16:57.732 ] 00:16:57.732 }' 00:16:57.732 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.732 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84 00:16:57.992 09:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.992 09:09:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 [2024-11-06 09:09:57.012011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:57.992 [2024-11-06 09:09:57.012069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:57.992 [2024-11-06 09:09:57.012078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:57.992 [2024-11-06 09:09:57.012362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:57.992 [2024-11-06 09:09:57.012538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:57.992 [2024-11-06 09:09:57.012555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:57.992 [2024-11-06 09:09:57.012841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.992 NewBaseBdev 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.992 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.251 [ 00:16:58.251 { 00:16:58.251 "name": "NewBaseBdev", 00:16:58.251 "aliases": [ 00:16:58.251 "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84" 00:16:58.251 ], 00:16:58.251 "product_name": "Malloc disk", 00:16:58.251 "block_size": 512, 00:16:58.251 "num_blocks": 65536, 00:16:58.251 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:58.251 "assigned_rate_limits": { 00:16:58.251 "rw_ios_per_sec": 0, 00:16:58.251 "rw_mbytes_per_sec": 0, 00:16:58.251 "r_mbytes_per_sec": 0, 00:16:58.251 "w_mbytes_per_sec": 0 00:16:58.251 }, 00:16:58.251 "claimed": true, 00:16:58.251 "claim_type": "exclusive_write", 00:16:58.251 "zoned": false, 00:16:58.251 "supported_io_types": { 00:16:58.251 "read": true, 00:16:58.251 "write": true, 00:16:58.251 "unmap": true, 00:16:58.251 "flush": true, 00:16:58.251 "reset": true, 00:16:58.251 "nvme_admin": false, 00:16:58.251 "nvme_io": false, 00:16:58.251 "nvme_io_md": false, 00:16:58.251 "write_zeroes": true, 00:16:58.251 "zcopy": true, 00:16:58.251 "get_zone_info": false, 00:16:58.251 "zone_management": false, 00:16:58.251 "zone_append": false, 00:16:58.251 "compare": false, 00:16:58.251 "compare_and_write": false, 00:16:58.251 "abort": true, 00:16:58.251 "seek_hole": false, 00:16:58.251 "seek_data": false, 00:16:58.251 "copy": true, 00:16:58.251 "nvme_iov_md": false 00:16:58.251 }, 00:16:58.251 "memory_domains": [ 00:16:58.251 { 00:16:58.251 
"dma_device_id": "system", 00:16:58.251 "dma_device_type": 1 00:16:58.251 }, 00:16:58.251 { 00:16:58.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.251 "dma_device_type": 2 00:16:58.251 } 00:16:58.251 ], 00:16:58.251 "driver_specific": {} 00:16:58.251 } 00:16:58.251 ] 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.251 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.251 "name": "Existed_Raid", 00:16:58.251 "uuid": "5ff1943b-6052-40ef-8b5f-b8fefcef7b2b", 00:16:58.251 "strip_size_kb": 0, 00:16:58.251 "state": "online", 00:16:58.251 "raid_level": "raid1", 00:16:58.251 "superblock": false, 00:16:58.252 "num_base_bdevs": 3, 00:16:58.252 "num_base_bdevs_discovered": 3, 00:16:58.252 "num_base_bdevs_operational": 3, 00:16:58.252 "base_bdevs_list": [ 00:16:58.252 { 00:16:58.252 "name": "NewBaseBdev", 00:16:58.252 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:58.252 "is_configured": true, 00:16:58.252 "data_offset": 0, 00:16:58.252 "data_size": 65536 00:16:58.252 }, 00:16:58.252 { 00:16:58.252 "name": "BaseBdev2", 00:16:58.252 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:58.252 "is_configured": true, 00:16:58.252 "data_offset": 0, 00:16:58.252 "data_size": 65536 00:16:58.252 }, 00:16:58.252 { 00:16:58.252 "name": "BaseBdev3", 00:16:58.252 "uuid": "9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:58.252 "is_configured": true, 00:16:58.252 "data_offset": 0, 00:16:58.252 "data_size": 65536 00:16:58.252 } 00:16:58.252 ] 00:16:58.252 }' 00:16:58.252 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.252 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.511 09:09:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.511 [2024-11-06 09:09:57.447731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.511 "name": "Existed_Raid", 00:16:58.511 "aliases": [ 00:16:58.511 "5ff1943b-6052-40ef-8b5f-b8fefcef7b2b" 00:16:58.511 ], 00:16:58.511 "product_name": "Raid Volume", 00:16:58.511 "block_size": 512, 00:16:58.511 "num_blocks": 65536, 00:16:58.511 "uuid": "5ff1943b-6052-40ef-8b5f-b8fefcef7b2b", 00:16:58.511 "assigned_rate_limits": { 00:16:58.511 "rw_ios_per_sec": 0, 00:16:58.511 "rw_mbytes_per_sec": 0, 00:16:58.511 "r_mbytes_per_sec": 0, 00:16:58.511 "w_mbytes_per_sec": 0 00:16:58.511 }, 00:16:58.511 "claimed": false, 00:16:58.511 "zoned": false, 00:16:58.511 "supported_io_types": { 00:16:58.511 "read": true, 00:16:58.511 "write": true, 00:16:58.511 "unmap": false, 00:16:58.511 "flush": false, 00:16:58.511 "reset": true, 00:16:58.511 "nvme_admin": false, 00:16:58.511 "nvme_io": false, 00:16:58.511 "nvme_io_md": false, 00:16:58.511 "write_zeroes": true, 00:16:58.511 "zcopy": false, 00:16:58.511 
"get_zone_info": false, 00:16:58.511 "zone_management": false, 00:16:58.511 "zone_append": false, 00:16:58.511 "compare": false, 00:16:58.511 "compare_and_write": false, 00:16:58.511 "abort": false, 00:16:58.511 "seek_hole": false, 00:16:58.511 "seek_data": false, 00:16:58.511 "copy": false, 00:16:58.511 "nvme_iov_md": false 00:16:58.511 }, 00:16:58.511 "memory_domains": [ 00:16:58.511 { 00:16:58.511 "dma_device_id": "system", 00:16:58.511 "dma_device_type": 1 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.511 "dma_device_type": 2 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "dma_device_id": "system", 00:16:58.511 "dma_device_type": 1 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.511 "dma_device_type": 2 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "dma_device_id": "system", 00:16:58.511 "dma_device_type": 1 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.511 "dma_device_type": 2 00:16:58.511 } 00:16:58.511 ], 00:16:58.511 "driver_specific": { 00:16:58.511 "raid": { 00:16:58.511 "uuid": "5ff1943b-6052-40ef-8b5f-b8fefcef7b2b", 00:16:58.511 "strip_size_kb": 0, 00:16:58.511 "state": "online", 00:16:58.511 "raid_level": "raid1", 00:16:58.511 "superblock": false, 00:16:58.511 "num_base_bdevs": 3, 00:16:58.511 "num_base_bdevs_discovered": 3, 00:16:58.511 "num_base_bdevs_operational": 3, 00:16:58.511 "base_bdevs_list": [ 00:16:58.511 { 00:16:58.511 "name": "NewBaseBdev", 00:16:58.511 "uuid": "8cfd9f38-4e00-4a2f-8fd8-7aa9f73a0d84", 00:16:58.511 "is_configured": true, 00:16:58.511 "data_offset": 0, 00:16:58.511 "data_size": 65536 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "name": "BaseBdev2", 00:16:58.511 "uuid": "6923a520-9238-40f0-af19-1df25ccd1e6c", 00:16:58.511 "is_configured": true, 00:16:58.511 "data_offset": 0, 00:16:58.511 "data_size": 65536 00:16:58.511 }, 00:16:58.511 { 00:16:58.511 "name": "BaseBdev3", 00:16:58.511 "uuid": 
"9e4a96c3-2cc7-4f63-9b1e-f7d807269d95", 00:16:58.511 "is_configured": true, 00:16:58.511 "data_offset": 0, 00:16:58.511 "data_size": 65536 00:16:58.511 } 00:16:58.511 ] 00:16:58.511 } 00:16:58.511 } 00:16:58.511 }' 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:58.511 BaseBdev2 00:16:58.511 BaseBdev3' 00:16:58.511 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.770 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.770 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.771 
[2024-11-06 09:09:57.699188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.771 [2024-11-06 09:09:57.699227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.771 [2024-11-06 09:09:57.699317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.771 [2024-11-06 09:09:57.699624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.771 [2024-11-06 09:09:57.699648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67180 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67180 ']' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67180 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67180 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67180' 00:16:58.771 killing process with pid 67180 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67180 00:16:58.771 [2024-11-06 
09:09:57.754500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.771 09:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67180 00:16:59.030 [2024-11-06 09:09:58.057985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.418 00:17:00.418 real 0m10.123s 00:17:00.418 user 0m16.077s 00:17:00.418 sys 0m2.034s 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:00.418 ************************************ 00:17:00.418 END TEST raid_state_function_test 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.418 ************************************ 00:17:00.418 09:09:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:17:00.418 09:09:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:00.418 09:09:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:00.418 09:09:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.418 ************************************ 00:17:00.418 START TEST raid_state_function_test_sb 00:17:00.418 ************************************ 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:00.418 09:09:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:00.418 
09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67801 00:17:00.418 Process raid pid: 67801 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67801' 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67801 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 67801 ']' 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:00.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:00.418 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.418 [2024-11-06 09:09:59.340048] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:00.418 [2024-11-06 09:09:59.340180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.676 [2024-11-06 09:09:59.520822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.676 [2024-11-06 09:09:59.640715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.934 [2024-11-06 09:09:59.853735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.934 [2024-11-06 09:09:59.853785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.191 [2024-11-06 09:10:00.182615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.191 [2024-11-06 09:10:00.182721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.191 [2024-11-06 09:10:00.182741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.191 [2024-11-06 09:10:00.182764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.191 [2024-11-06 09:10:00.182778] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:01.191 [2024-11-06 09:10:00.182800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.191 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.192 09:10:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.450 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.450 "name": "Existed_Raid", 00:17:01.450 "uuid": "6383729d-460f-4c0e-ab07-5509a05177ef", 00:17:01.450 "strip_size_kb": 0, 00:17:01.450 "state": "configuring", 00:17:01.450 "raid_level": "raid1", 00:17:01.450 "superblock": true, 00:17:01.450 "num_base_bdevs": 3, 00:17:01.450 "num_base_bdevs_discovered": 0, 00:17:01.450 "num_base_bdevs_operational": 3, 00:17:01.450 "base_bdevs_list": [ 00:17:01.450 { 00:17:01.450 "name": "BaseBdev1", 00:17:01.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.450 "is_configured": false, 00:17:01.450 "data_offset": 0, 00:17:01.450 "data_size": 0 00:17:01.450 }, 00:17:01.450 { 00:17:01.450 "name": "BaseBdev2", 00:17:01.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.450 "is_configured": false, 00:17:01.450 "data_offset": 0, 00:17:01.450 "data_size": 0 00:17:01.450 }, 00:17:01.450 { 00:17:01.450 "name": "BaseBdev3", 00:17:01.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.450 "is_configured": false, 00:17:01.450 "data_offset": 0, 00:17:01.450 "data_size": 0 00:17:01.450 } 00:17:01.450 ] 00:17:01.450 }' 00:17:01.450 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.450 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 [2024-11-06 09:10:00.630523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.709 [2024-11-06 09:10:00.630599] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 [2024-11-06 09:10:00.642474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.709 [2024-11-06 09:10:00.642544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.709 [2024-11-06 09:10:00.642561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.709 [2024-11-06 09:10:00.642588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.709 [2024-11-06 09:10:00.642601] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.709 [2024-11-06 09:10:00.642617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 [2024-11-06 09:10:00.697551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.709 BaseBdev1 
00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.709 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.709 [ 00:17:01.709 { 00:17:01.709 "name": "BaseBdev1", 00:17:01.709 "aliases": [ 00:17:01.709 "63b36827-20eb-42d6-9833-be4f62264345" 00:17:01.709 ], 00:17:01.709 "product_name": "Malloc disk", 00:17:01.709 "block_size": 512, 00:17:01.709 "num_blocks": 65536, 00:17:01.709 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:01.709 "assigned_rate_limits": { 00:17:01.709 
"rw_ios_per_sec": 0, 00:17:01.709 "rw_mbytes_per_sec": 0, 00:17:01.709 "r_mbytes_per_sec": 0, 00:17:01.709 "w_mbytes_per_sec": 0 00:17:01.709 }, 00:17:01.709 "claimed": true, 00:17:01.709 "claim_type": "exclusive_write", 00:17:01.709 "zoned": false, 00:17:01.709 "supported_io_types": { 00:17:01.709 "read": true, 00:17:01.709 "write": true, 00:17:01.709 "unmap": true, 00:17:01.709 "flush": true, 00:17:01.709 "reset": true, 00:17:01.709 "nvme_admin": false, 00:17:01.709 "nvme_io": false, 00:17:01.709 "nvme_io_md": false, 00:17:01.709 "write_zeroes": true, 00:17:01.709 "zcopy": true, 00:17:01.709 "get_zone_info": false, 00:17:01.709 "zone_management": false, 00:17:01.709 "zone_append": false, 00:17:01.709 "compare": false, 00:17:01.709 "compare_and_write": false, 00:17:01.709 "abort": true, 00:17:01.709 "seek_hole": false, 00:17:01.709 "seek_data": false, 00:17:01.709 "copy": true, 00:17:01.709 "nvme_iov_md": false 00:17:01.709 }, 00:17:01.709 "memory_domains": [ 00:17:01.710 { 00:17:01.710 "dma_device_id": "system", 00:17:01.710 "dma_device_type": 1 00:17:01.710 }, 00:17:01.710 { 00:17:01.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.710 "dma_device_type": 2 00:17:01.710 } 00:17:01.710 ], 00:17:01.710 "driver_specific": {} 00:17:01.710 } 00:17:01.710 ] 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.710 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.967 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.967 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.967 "name": "Existed_Raid", 00:17:01.967 "uuid": "86a08bc6-307f-4a60-896e-4213a0826364", 00:17:01.967 "strip_size_kb": 0, 00:17:01.967 "state": "configuring", 00:17:01.967 "raid_level": "raid1", 00:17:01.967 "superblock": true, 00:17:01.967 "num_base_bdevs": 3, 00:17:01.967 "num_base_bdevs_discovered": 1, 00:17:01.967 "num_base_bdevs_operational": 3, 00:17:01.967 "base_bdevs_list": [ 00:17:01.967 { 00:17:01.967 "name": "BaseBdev1", 00:17:01.967 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:01.967 "is_configured": true, 00:17:01.967 "data_offset": 2048, 00:17:01.967 "data_size": 63488 
00:17:01.967 }, 00:17:01.967 { 00:17:01.967 "name": "BaseBdev2", 00:17:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.967 "is_configured": false, 00:17:01.967 "data_offset": 0, 00:17:01.967 "data_size": 0 00:17:01.967 }, 00:17:01.967 { 00:17:01.967 "name": "BaseBdev3", 00:17:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.967 "is_configured": false, 00:17:01.967 "data_offset": 0, 00:17:01.967 "data_size": 0 00:17:01.967 } 00:17:01.967 ] 00:17:01.967 }' 00:17:01.967 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.967 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.225 [2024-11-06 09:10:01.129821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.225 [2024-11-06 09:10:01.129927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.225 [2024-11-06 09:10:01.141858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.225 [2024-11-06 09:10:01.144443] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.225 [2024-11-06 09:10:01.144503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.225 [2024-11-06 09:10:01.144518] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.225 [2024-11-06 09:10:01.144533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.225 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.226 "name": "Existed_Raid", 00:17:02.226 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:02.226 "strip_size_kb": 0, 00:17:02.226 "state": "configuring", 00:17:02.226 "raid_level": "raid1", 00:17:02.226 "superblock": true, 00:17:02.226 "num_base_bdevs": 3, 00:17:02.226 "num_base_bdevs_discovered": 1, 00:17:02.226 "num_base_bdevs_operational": 3, 00:17:02.226 "base_bdevs_list": [ 00:17:02.226 { 00:17:02.226 "name": "BaseBdev1", 00:17:02.226 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:02.226 "is_configured": true, 00:17:02.226 "data_offset": 2048, 00:17:02.226 "data_size": 63488 00:17:02.226 }, 00:17:02.226 { 00:17:02.226 "name": "BaseBdev2", 00:17:02.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.226 "is_configured": false, 00:17:02.226 "data_offset": 0, 00:17:02.226 "data_size": 0 00:17:02.226 }, 00:17:02.226 { 00:17:02.226 "name": "BaseBdev3", 00:17:02.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.226 "is_configured": false, 00:17:02.226 "data_offset": 0, 00:17:02.226 "data_size": 0 00:17:02.226 } 00:17:02.226 ] 00:17:02.226 }' 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.226 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 [2024-11-06 09:10:01.620603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.819 BaseBdev2 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 [ 00:17:02.819 { 00:17:02.819 "name": "BaseBdev2", 00:17:02.819 "aliases": [ 00:17:02.819 "e2898cf1-d57a-4345-9496-1d294a878d7d" 00:17:02.819 ], 00:17:02.819 "product_name": "Malloc disk", 00:17:02.819 "block_size": 512, 00:17:02.819 "num_blocks": 65536, 00:17:02.819 "uuid": "e2898cf1-d57a-4345-9496-1d294a878d7d", 00:17:02.819 "assigned_rate_limits": { 00:17:02.819 "rw_ios_per_sec": 0, 00:17:02.819 "rw_mbytes_per_sec": 0, 00:17:02.819 "r_mbytes_per_sec": 0, 00:17:02.819 "w_mbytes_per_sec": 0 00:17:02.819 }, 00:17:02.819 "claimed": true, 00:17:02.819 "claim_type": "exclusive_write", 00:17:02.819 "zoned": false, 00:17:02.819 "supported_io_types": { 00:17:02.819 "read": true, 00:17:02.819 "write": true, 00:17:02.819 "unmap": true, 00:17:02.819 "flush": true, 00:17:02.819 "reset": true, 00:17:02.819 "nvme_admin": false, 00:17:02.819 "nvme_io": false, 00:17:02.819 "nvme_io_md": false, 00:17:02.819 "write_zeroes": true, 00:17:02.819 "zcopy": true, 00:17:02.819 "get_zone_info": false, 00:17:02.819 "zone_management": false, 00:17:02.819 "zone_append": false, 00:17:02.819 "compare": false, 00:17:02.819 "compare_and_write": false, 00:17:02.819 "abort": true, 00:17:02.819 "seek_hole": false, 00:17:02.819 "seek_data": false, 00:17:02.819 "copy": true, 00:17:02.819 "nvme_iov_md": false 00:17:02.819 }, 00:17:02.819 "memory_domains": [ 00:17:02.819 { 00:17:02.819 "dma_device_id": "system", 00:17:02.819 "dma_device_type": 1 00:17:02.819 }, 00:17:02.819 { 00:17:02.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.819 "dma_device_type": 2 00:17:02.819 } 00:17:02.819 ], 00:17:02.819 "driver_specific": {} 00:17:02.819 } 00:17:02.819 ] 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.819 
09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.819 "name": "Existed_Raid", 00:17:02.819 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:02.819 "strip_size_kb": 0, 00:17:02.819 "state": "configuring", 00:17:02.819 "raid_level": "raid1", 00:17:02.819 "superblock": true, 00:17:02.819 "num_base_bdevs": 3, 00:17:02.819 "num_base_bdevs_discovered": 2, 00:17:02.819 "num_base_bdevs_operational": 3, 00:17:02.819 "base_bdevs_list": [ 00:17:02.819 { 00:17:02.819 "name": "BaseBdev1", 00:17:02.819 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:02.819 "is_configured": true, 00:17:02.819 "data_offset": 2048, 00:17:02.819 "data_size": 63488 00:17:02.819 }, 00:17:02.819 { 00:17:02.819 "name": "BaseBdev2", 00:17:02.819 "uuid": "e2898cf1-d57a-4345-9496-1d294a878d7d", 00:17:02.819 "is_configured": true, 00:17:02.819 "data_offset": 2048, 00:17:02.819 "data_size": 63488 00:17:02.819 }, 00:17:02.819 { 00:17:02.819 "name": "BaseBdev3", 00:17:02.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.819 "is_configured": false, 00:17:02.819 "data_offset": 0, 00:17:02.819 "data_size": 0 00:17:02.819 } 00:17:02.819 ] 00:17:02.819 }' 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.819 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.078 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:03.078 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.078 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.338 [2024-11-06 09:10:02.173008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.338 [2024-11-06 09:10:02.173968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:17:03.338 [2024-11-06 09:10:02.174013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:03.338 BaseBdev3 00:17:03.338 [2024-11-06 09:10:02.174418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.338 [2024-11-06 09:10:02.174619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.338 [2024-11-06 09:10:02.174632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:03.338 [2024-11-06 09:10:02.174820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.338 09:10:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.338 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.338 [ 00:17:03.338 { 00:17:03.338 "name": "BaseBdev3", 00:17:03.338 "aliases": [ 00:17:03.338 "c088190a-e169-4587-86b2-3629b1de169b" 00:17:03.338 ], 00:17:03.338 "product_name": "Malloc disk", 00:17:03.338 "block_size": 512, 00:17:03.338 "num_blocks": 65536, 00:17:03.338 "uuid": "c088190a-e169-4587-86b2-3629b1de169b", 00:17:03.338 "assigned_rate_limits": { 00:17:03.338 "rw_ios_per_sec": 0, 00:17:03.338 "rw_mbytes_per_sec": 0, 00:17:03.338 "r_mbytes_per_sec": 0, 00:17:03.338 "w_mbytes_per_sec": 0 00:17:03.338 }, 00:17:03.338 "claimed": true, 00:17:03.338 "claim_type": "exclusive_write", 00:17:03.338 "zoned": false, 00:17:03.338 "supported_io_types": { 00:17:03.338 "read": true, 00:17:03.338 "write": true, 00:17:03.338 "unmap": true, 00:17:03.338 "flush": true, 00:17:03.338 "reset": true, 00:17:03.338 "nvme_admin": false, 00:17:03.338 "nvme_io": false, 00:17:03.338 "nvme_io_md": false, 00:17:03.338 "write_zeroes": true, 00:17:03.338 "zcopy": true, 00:17:03.338 "get_zone_info": false, 00:17:03.338 "zone_management": false, 00:17:03.338 "zone_append": false, 00:17:03.338 "compare": false, 00:17:03.338 "compare_and_write": false, 00:17:03.338 "abort": true, 00:17:03.338 "seek_hole": false, 00:17:03.338 "seek_data": false, 00:17:03.338 "copy": true, 00:17:03.338 "nvme_iov_md": false 00:17:03.338 }, 00:17:03.338 "memory_domains": [ 00:17:03.338 { 00:17:03.338 "dma_device_id": "system", 00:17:03.338 "dma_device_type": 1 00:17:03.338 }, 00:17:03.338 { 00:17:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.338 "dma_device_type": 2 00:17:03.338 } 00:17:03.338 ], 00:17:03.338 "driver_specific": {} 00:17:03.339 } 00:17:03.339 ] 
00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.339 
09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.339 "name": "Existed_Raid", 00:17:03.339 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:03.339 "strip_size_kb": 0, 00:17:03.339 "state": "online", 00:17:03.339 "raid_level": "raid1", 00:17:03.339 "superblock": true, 00:17:03.339 "num_base_bdevs": 3, 00:17:03.339 "num_base_bdevs_discovered": 3, 00:17:03.339 "num_base_bdevs_operational": 3, 00:17:03.339 "base_bdevs_list": [ 00:17:03.339 { 00:17:03.339 "name": "BaseBdev1", 00:17:03.339 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:03.339 "is_configured": true, 00:17:03.339 "data_offset": 2048, 00:17:03.339 "data_size": 63488 00:17:03.339 }, 00:17:03.339 { 00:17:03.339 "name": "BaseBdev2", 00:17:03.339 "uuid": "e2898cf1-d57a-4345-9496-1d294a878d7d", 00:17:03.339 "is_configured": true, 00:17:03.339 "data_offset": 2048, 00:17:03.339 "data_size": 63488 00:17:03.339 }, 00:17:03.339 { 00:17:03.339 "name": "BaseBdev3", 00:17:03.339 "uuid": "c088190a-e169-4587-86b2-3629b1de169b", 00:17:03.339 "is_configured": true, 00:17:03.339 "data_offset": 2048, 00:17:03.339 "data_size": 63488 00:17:03.339 } 00:17:03.339 ] 00:17:03.339 }' 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.339 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.907 [2024-11-06 09:10:02.692898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.907 "name": "Existed_Raid", 00:17:03.907 "aliases": [ 00:17:03.907 "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1" 00:17:03.907 ], 00:17:03.907 "product_name": "Raid Volume", 00:17:03.907 "block_size": 512, 00:17:03.907 "num_blocks": 63488, 00:17:03.907 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:03.907 "assigned_rate_limits": { 00:17:03.907 "rw_ios_per_sec": 0, 00:17:03.907 "rw_mbytes_per_sec": 0, 00:17:03.907 "r_mbytes_per_sec": 0, 00:17:03.907 "w_mbytes_per_sec": 0 00:17:03.907 }, 00:17:03.907 "claimed": false, 00:17:03.907 "zoned": false, 00:17:03.907 "supported_io_types": { 00:17:03.907 "read": true, 00:17:03.907 "write": true, 00:17:03.907 "unmap": false, 00:17:03.907 "flush": false, 00:17:03.907 "reset": true, 00:17:03.907 "nvme_admin": false, 00:17:03.907 "nvme_io": false, 00:17:03.907 "nvme_io_md": false, 00:17:03.907 "write_zeroes": true, 
00:17:03.907 "zcopy": false, 00:17:03.907 "get_zone_info": false, 00:17:03.907 "zone_management": false, 00:17:03.907 "zone_append": false, 00:17:03.907 "compare": false, 00:17:03.907 "compare_and_write": false, 00:17:03.907 "abort": false, 00:17:03.907 "seek_hole": false, 00:17:03.907 "seek_data": false, 00:17:03.907 "copy": false, 00:17:03.907 "nvme_iov_md": false 00:17:03.907 }, 00:17:03.907 "memory_domains": [ 00:17:03.907 { 00:17:03.907 "dma_device_id": "system", 00:17:03.907 "dma_device_type": 1 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.907 "dma_device_type": 2 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "dma_device_id": "system", 00:17:03.907 "dma_device_type": 1 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.907 "dma_device_type": 2 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "dma_device_id": "system", 00:17:03.907 "dma_device_type": 1 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.907 "dma_device_type": 2 00:17:03.907 } 00:17:03.907 ], 00:17:03.907 "driver_specific": { 00:17:03.907 "raid": { 00:17:03.907 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:03.907 "strip_size_kb": 0, 00:17:03.907 "state": "online", 00:17:03.907 "raid_level": "raid1", 00:17:03.907 "superblock": true, 00:17:03.907 "num_base_bdevs": 3, 00:17:03.907 "num_base_bdevs_discovered": 3, 00:17:03.907 "num_base_bdevs_operational": 3, 00:17:03.907 "base_bdevs_list": [ 00:17:03.907 { 00:17:03.907 "name": "BaseBdev1", 00:17:03.907 "uuid": "63b36827-20eb-42d6-9833-be4f62264345", 00:17:03.907 "is_configured": true, 00:17:03.907 "data_offset": 2048, 00:17:03.907 "data_size": 63488 00:17:03.907 }, 00:17:03.907 { 00:17:03.907 "name": "BaseBdev2", 00:17:03.907 "uuid": "e2898cf1-d57a-4345-9496-1d294a878d7d", 00:17:03.907 "is_configured": true, 00:17:03.907 "data_offset": 2048, 00:17:03.907 "data_size": 63488 00:17:03.907 }, 00:17:03.907 { 
00:17:03.907 "name": "BaseBdev3", 00:17:03.907 "uuid": "c088190a-e169-4587-86b2-3629b1de169b", 00:17:03.907 "is_configured": true, 00:17:03.907 "data_offset": 2048, 00:17:03.907 "data_size": 63488 00:17:03.907 } 00:17:03.907 ] 00:17:03.907 } 00:17:03.907 } 00:17:03.907 }' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.907 BaseBdev2 00:17:03.907 BaseBdev3' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.907 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.908 09:10:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.166 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.166 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.166 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:04.166 09:10:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.166 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 [2024-11-06 09:10:02.960263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.166 
09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.166 "name": "Existed_Raid", 00:17:04.166 "uuid": "af1fa7f8-5331-4fb1-8d61-3d4ea4462bc1", 00:17:04.166 "strip_size_kb": 0, 00:17:04.166 "state": "online", 00:17:04.166 "raid_level": "raid1", 00:17:04.166 "superblock": true, 00:17:04.166 "num_base_bdevs": 3, 00:17:04.166 "num_base_bdevs_discovered": 2, 00:17:04.166 "num_base_bdevs_operational": 2, 00:17:04.166 "base_bdevs_list": [ 00:17:04.166 { 00:17:04.166 "name": null, 00:17:04.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.166 "is_configured": false, 00:17:04.166 "data_offset": 0, 00:17:04.166 "data_size": 63488 00:17:04.166 }, 00:17:04.166 { 00:17:04.166 "name": "BaseBdev2", 00:17:04.166 "uuid": "e2898cf1-d57a-4345-9496-1d294a878d7d", 00:17:04.166 "is_configured": true, 00:17:04.166 "data_offset": 2048, 00:17:04.166 "data_size": 63488 00:17:04.166 }, 00:17:04.166 { 00:17:04.166 "name": "BaseBdev3", 00:17:04.166 "uuid": "c088190a-e169-4587-86b2-3629b1de169b", 00:17:04.166 "is_configured": true, 00:17:04.166 "data_offset": 2048, 00:17:04.166 "data_size": 63488 00:17:04.166 } 00:17:04.166 ] 00:17:04.166 }' 00:17:04.166 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.166 
09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.735 [2024-11-06 09:10:03.528615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.735 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.735 [2024-11-06 09:10:03.693173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.735 [2024-11-06 09:10:03.693365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.002 [2024-11-06 09:10:03.801647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.002 [2024-11-06 09:10:03.801992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.002 [2024-11-06 09:10:03.802123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.002 BaseBdev2 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.002 [ 00:17:05.002 { 00:17:05.002 "name": "BaseBdev2", 00:17:05.002 "aliases": [ 00:17:05.002 "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2" 00:17:05.002 ], 00:17:05.002 "product_name": "Malloc disk", 00:17:05.002 "block_size": 512, 00:17:05.002 "num_blocks": 65536, 00:17:05.002 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:05.002 "assigned_rate_limits": { 00:17:05.002 "rw_ios_per_sec": 0, 00:17:05.002 "rw_mbytes_per_sec": 0, 00:17:05.002 "r_mbytes_per_sec": 0, 00:17:05.002 "w_mbytes_per_sec": 0 00:17:05.002 }, 00:17:05.002 "claimed": false, 00:17:05.002 "zoned": false, 00:17:05.002 "supported_io_types": { 00:17:05.002 "read": true, 00:17:05.002 "write": true, 00:17:05.002 "unmap": true, 00:17:05.002 "flush": true, 00:17:05.002 "reset": true, 00:17:05.002 "nvme_admin": false, 00:17:05.002 "nvme_io": false, 00:17:05.002 
"nvme_io_md": false, 00:17:05.002 "write_zeroes": true, 00:17:05.002 "zcopy": true, 00:17:05.002 "get_zone_info": false, 00:17:05.002 "zone_management": false, 00:17:05.002 "zone_append": false, 00:17:05.002 "compare": false, 00:17:05.002 "compare_and_write": false, 00:17:05.002 "abort": true, 00:17:05.002 "seek_hole": false, 00:17:05.002 "seek_data": false, 00:17:05.002 "copy": true, 00:17:05.002 "nvme_iov_md": false 00:17:05.002 }, 00:17:05.002 "memory_domains": [ 00:17:05.002 { 00:17:05.002 "dma_device_id": "system", 00:17:05.002 "dma_device_type": 1 00:17:05.002 }, 00:17:05.002 { 00:17:05.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.002 "dma_device_type": 2 00:17:05.002 } 00:17:05.002 ], 00:17:05.002 "driver_specific": {} 00:17:05.002 } 00:17:05.002 ] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.002 BaseBdev3 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:05.002 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.003 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.003 [ 00:17:05.003 { 00:17:05.003 "name": "BaseBdev3", 00:17:05.003 "aliases": [ 00:17:05.003 "1a0f9783-cc7a-468a-8cea-bb589bc6c17a" 00:17:05.003 ], 00:17:05.003 "product_name": "Malloc disk", 00:17:05.003 "block_size": 512, 00:17:05.003 "num_blocks": 65536, 00:17:05.003 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:05.003 "assigned_rate_limits": { 00:17:05.003 "rw_ios_per_sec": 0, 00:17:05.003 "rw_mbytes_per_sec": 0, 00:17:05.003 "r_mbytes_per_sec": 0, 00:17:05.003 "w_mbytes_per_sec": 0 00:17:05.003 }, 00:17:05.003 "claimed": false, 00:17:05.003 "zoned": false, 00:17:05.003 "supported_io_types": { 00:17:05.003 "read": true, 00:17:05.003 "write": true, 00:17:05.003 "unmap": true, 00:17:05.003 "flush": true, 00:17:05.003 "reset": true, 00:17:05.003 "nvme_admin": false, 
00:17:05.003 "nvme_io": false, 00:17:05.003 "nvme_io_md": false, 00:17:05.003 "write_zeroes": true, 00:17:05.003 "zcopy": true, 00:17:05.003 "get_zone_info": false, 00:17:05.003 "zone_management": false, 00:17:05.003 "zone_append": false, 00:17:05.003 "compare": false, 00:17:05.003 "compare_and_write": false, 00:17:05.003 "abort": true, 00:17:05.003 "seek_hole": false, 00:17:05.003 "seek_data": false, 00:17:05.003 "copy": true, 00:17:05.003 "nvme_iov_md": false 00:17:05.003 }, 00:17:05.003 "memory_domains": [ 00:17:05.003 { 00:17:05.003 "dma_device_id": "system", 00:17:05.003 "dma_device_type": 1 00:17:05.003 }, 00:17:05.003 { 00:17:05.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.003 "dma_device_type": 2 00:17:05.003 } 00:17:05.003 ], 00:17:05.003 "driver_specific": {} 00:17:05.003 } 00:17:05.003 ] 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.003 [2024-11-06 09:10:04.030784] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.003 [2024-11-06 09:10:04.030865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.003 [2024-11-06 09:10:04.030892] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.003 [2024-11-06 09:10:04.033423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.003 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.261 
09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.261 "name": "Existed_Raid", 00:17:05.261 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:05.261 "strip_size_kb": 0, 00:17:05.261 "state": "configuring", 00:17:05.261 "raid_level": "raid1", 00:17:05.261 "superblock": true, 00:17:05.261 "num_base_bdevs": 3, 00:17:05.261 "num_base_bdevs_discovered": 2, 00:17:05.261 "num_base_bdevs_operational": 3, 00:17:05.261 "base_bdevs_list": [ 00:17:05.261 { 00:17:05.261 "name": "BaseBdev1", 00:17:05.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.261 "is_configured": false, 00:17:05.261 "data_offset": 0, 00:17:05.261 "data_size": 0 00:17:05.261 }, 00:17:05.261 { 00:17:05.261 "name": "BaseBdev2", 00:17:05.261 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:05.261 "is_configured": true, 00:17:05.261 "data_offset": 2048, 00:17:05.261 "data_size": 63488 00:17:05.261 }, 00:17:05.261 { 00:17:05.261 "name": "BaseBdev3", 00:17:05.261 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:05.261 "is_configured": true, 00:17:05.261 "data_offset": 2048, 00:17:05.261 "data_size": 63488 00:17:05.261 } 00:17:05.261 ] 00:17:05.261 }' 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.261 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.521 [2024-11-06 09:10:04.422582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.521 09:10:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.521 "name": 
"Existed_Raid", 00:17:05.521 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:05.521 "strip_size_kb": 0, 00:17:05.521 "state": "configuring", 00:17:05.521 "raid_level": "raid1", 00:17:05.521 "superblock": true, 00:17:05.521 "num_base_bdevs": 3, 00:17:05.521 "num_base_bdevs_discovered": 1, 00:17:05.521 "num_base_bdevs_operational": 3, 00:17:05.521 "base_bdevs_list": [ 00:17:05.521 { 00:17:05.521 "name": "BaseBdev1", 00:17:05.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.521 "is_configured": false, 00:17:05.521 "data_offset": 0, 00:17:05.521 "data_size": 0 00:17:05.521 }, 00:17:05.521 { 00:17:05.521 "name": null, 00:17:05.521 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:05.521 "is_configured": false, 00:17:05.521 "data_offset": 0, 00:17:05.521 "data_size": 63488 00:17:05.521 }, 00:17:05.521 { 00:17:05.521 "name": "BaseBdev3", 00:17:05.521 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:05.521 "is_configured": true, 00:17:05.521 "data_offset": 2048, 00:17:05.521 "data_size": 63488 00:17:05.521 } 00:17:05.521 ] 00:17:05.521 }' 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.521 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:06.089 
09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 [2024-11-06 09:10:04.931901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.089 BaseBdev1 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:06.089 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.090 [ 00:17:06.090 { 00:17:06.090 "name": "BaseBdev1", 00:17:06.090 "aliases": [ 00:17:06.090 "a1a39d16-e96c-4dbc-aa16-7c9637d51c30" 00:17:06.090 ], 00:17:06.090 "product_name": "Malloc disk", 00:17:06.090 "block_size": 512, 00:17:06.090 "num_blocks": 65536, 00:17:06.090 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:06.090 "assigned_rate_limits": { 00:17:06.090 "rw_ios_per_sec": 0, 00:17:06.090 "rw_mbytes_per_sec": 0, 00:17:06.090 "r_mbytes_per_sec": 0, 00:17:06.090 "w_mbytes_per_sec": 0 00:17:06.090 }, 00:17:06.090 "claimed": true, 00:17:06.090 "claim_type": "exclusive_write", 00:17:06.090 "zoned": false, 00:17:06.090 "supported_io_types": { 00:17:06.090 "read": true, 00:17:06.090 "write": true, 00:17:06.090 "unmap": true, 00:17:06.090 "flush": true, 00:17:06.090 "reset": true, 00:17:06.090 "nvme_admin": false, 00:17:06.090 "nvme_io": false, 00:17:06.090 "nvme_io_md": false, 00:17:06.090 "write_zeroes": true, 00:17:06.090 "zcopy": true, 00:17:06.090 "get_zone_info": false, 00:17:06.090 "zone_management": false, 00:17:06.090 "zone_append": false, 00:17:06.090 "compare": false, 00:17:06.090 "compare_and_write": false, 00:17:06.090 "abort": true, 00:17:06.090 "seek_hole": false, 00:17:06.090 "seek_data": false, 00:17:06.090 "copy": true, 00:17:06.090 "nvme_iov_md": false 00:17:06.090 }, 00:17:06.090 "memory_domains": [ 00:17:06.090 { 00:17:06.090 "dma_device_id": "system", 00:17:06.090 "dma_device_type": 1 00:17:06.090 }, 00:17:06.090 { 00:17:06.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.090 "dma_device_type": 2 00:17:06.090 } 00:17:06.090 ], 00:17:06.090 "driver_specific": {} 00:17:06.090 } 00:17:06.090 ] 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:06.090 
09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.090 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.090 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.090 "name": "Existed_Raid", 00:17:06.090 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:06.090 "strip_size_kb": 0, 
00:17:06.090 "state": "configuring", 00:17:06.090 "raid_level": "raid1", 00:17:06.090 "superblock": true, 00:17:06.090 "num_base_bdevs": 3, 00:17:06.090 "num_base_bdevs_discovered": 2, 00:17:06.090 "num_base_bdevs_operational": 3, 00:17:06.090 "base_bdevs_list": [ 00:17:06.090 { 00:17:06.090 "name": "BaseBdev1", 00:17:06.090 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:06.090 "is_configured": true, 00:17:06.090 "data_offset": 2048, 00:17:06.090 "data_size": 63488 00:17:06.090 }, 00:17:06.090 { 00:17:06.090 "name": null, 00:17:06.090 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:06.090 "is_configured": false, 00:17:06.090 "data_offset": 0, 00:17:06.090 "data_size": 63488 00:17:06.090 }, 00:17:06.090 { 00:17:06.090 "name": "BaseBdev3", 00:17:06.090 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:06.090 "is_configured": true, 00:17:06.090 "data_offset": 2048, 00:17:06.090 "data_size": 63488 00:17:06.090 } 00:17:06.090 ] 00:17:06.090 }' 00:17:06.090 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.090 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.659 [2024-11-06 09:10:05.439496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.659 09:10:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.659 "name": "Existed_Raid", 00:17:06.659 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:06.659 "strip_size_kb": 0, 00:17:06.659 "state": "configuring", 00:17:06.659 "raid_level": "raid1", 00:17:06.659 "superblock": true, 00:17:06.659 "num_base_bdevs": 3, 00:17:06.659 "num_base_bdevs_discovered": 1, 00:17:06.659 "num_base_bdevs_operational": 3, 00:17:06.659 "base_bdevs_list": [ 00:17:06.659 { 00:17:06.659 "name": "BaseBdev1", 00:17:06.659 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:06.659 "is_configured": true, 00:17:06.659 "data_offset": 2048, 00:17:06.659 "data_size": 63488 00:17:06.659 }, 00:17:06.659 { 00:17:06.659 "name": null, 00:17:06.659 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:06.659 "is_configured": false, 00:17:06.659 "data_offset": 0, 00:17:06.659 "data_size": 63488 00:17:06.659 }, 00:17:06.659 { 00:17:06.659 "name": null, 00:17:06.659 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:06.659 "is_configured": false, 00:17:06.659 "data_offset": 0, 00:17:06.659 "data_size": 63488 00:17:06.659 } 00:17:06.659 ] 00:17:06.659 }' 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.659 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.919 [2024-11-06 09:10:05.886959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.919 "name": "Existed_Raid", 00:17:06.919 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:06.919 "strip_size_kb": 0, 00:17:06.919 "state": "configuring", 00:17:06.919 "raid_level": "raid1", 00:17:06.919 "superblock": true, 00:17:06.919 "num_base_bdevs": 3, 00:17:06.919 "num_base_bdevs_discovered": 2, 00:17:06.919 "num_base_bdevs_operational": 3, 00:17:06.919 "base_bdevs_list": [ 00:17:06.919 { 00:17:06.919 "name": "BaseBdev1", 00:17:06.919 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:06.919 "is_configured": true, 00:17:06.919 "data_offset": 2048, 00:17:06.919 "data_size": 63488 00:17:06.919 }, 00:17:06.919 { 00:17:06.919 "name": null, 00:17:06.919 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:06.919 "is_configured": false, 00:17:06.919 "data_offset": 0, 00:17:06.919 "data_size": 63488 00:17:06.919 }, 00:17:06.919 { 00:17:06.919 "name": "BaseBdev3", 00:17:06.919 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:06.919 "is_configured": true, 00:17:06.919 "data_offset": 2048, 00:17:06.919 "data_size": 63488 00:17:06.919 } 00:17:06.919 ] 00:17:06.919 }' 00:17:06.919 09:10:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.920 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.487 [2024-11-06 09:10:06.354505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.487 "name": "Existed_Raid", 00:17:07.487 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:07.487 "strip_size_kb": 0, 00:17:07.487 "state": "configuring", 00:17:07.487 "raid_level": "raid1", 00:17:07.487 "superblock": true, 00:17:07.487 "num_base_bdevs": 3, 00:17:07.487 "num_base_bdevs_discovered": 1, 00:17:07.487 "num_base_bdevs_operational": 3, 00:17:07.487 "base_bdevs_list": [ 00:17:07.487 { 00:17:07.487 "name": null, 00:17:07.487 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:07.487 "is_configured": false, 00:17:07.487 "data_offset": 0, 00:17:07.487 "data_size": 63488 00:17:07.487 }, 00:17:07.487 { 00:17:07.487 "name": null, 00:17:07.487 "uuid": 
"5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:07.487 "is_configured": false, 00:17:07.487 "data_offset": 0, 00:17:07.487 "data_size": 63488 00:17:07.487 }, 00:17:07.487 { 00:17:07.487 "name": "BaseBdev3", 00:17:07.487 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:07.487 "is_configured": true, 00:17:07.487 "data_offset": 2048, 00:17:07.487 "data_size": 63488 00:17:07.487 } 00:17:07.487 ] 00:17:07.487 }' 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.487 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 [2024-11-06 09:10:06.908535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.056 "name": "Existed_Raid", 00:17:08.056 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:08.056 "strip_size_kb": 0, 00:17:08.056 "state": "configuring", 00:17:08.056 
"raid_level": "raid1", 00:17:08.056 "superblock": true, 00:17:08.056 "num_base_bdevs": 3, 00:17:08.056 "num_base_bdevs_discovered": 2, 00:17:08.056 "num_base_bdevs_operational": 3, 00:17:08.056 "base_bdevs_list": [ 00:17:08.056 { 00:17:08.056 "name": null, 00:17:08.056 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:08.056 "is_configured": false, 00:17:08.056 "data_offset": 0, 00:17:08.056 "data_size": 63488 00:17:08.056 }, 00:17:08.056 { 00:17:08.056 "name": "BaseBdev2", 00:17:08.056 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:08.056 "is_configured": true, 00:17:08.056 "data_offset": 2048, 00:17:08.056 "data_size": 63488 00:17:08.056 }, 00:17:08.056 { 00:17:08.056 "name": "BaseBdev3", 00:17:08.056 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:08.056 "is_configured": true, 00:17:08.056 "data_offset": 2048, 00:17:08.056 "data_size": 63488 00:17:08.056 } 00:17:08.056 ] 00:17:08.056 }' 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.056 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.315 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.315 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.315 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.315 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.315 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.574 09:10:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a1a39d16-e96c-4dbc-aa16-7c9637d51c30 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.574 [2024-11-06 09:10:07.446506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:08.574 [2024-11-06 09:10:07.446829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:08.574 [2024-11-06 09:10:07.446847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:08.574 [2024-11-06 09:10:07.447175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:08.574 [2024-11-06 09:10:07.447393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:08.574 [2024-11-06 09:10:07.447412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:08.574 [2024-11-06 09:10:07.447574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.574 NewBaseBdev 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:08.574 
09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.574 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.574 [ 00:17:08.574 { 00:17:08.574 "name": "NewBaseBdev", 00:17:08.574 "aliases": [ 00:17:08.574 "a1a39d16-e96c-4dbc-aa16-7c9637d51c30" 00:17:08.574 ], 00:17:08.574 "product_name": "Malloc disk", 00:17:08.574 "block_size": 512, 00:17:08.574 "num_blocks": 65536, 00:17:08.574 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:08.574 "assigned_rate_limits": { 00:17:08.574 "rw_ios_per_sec": 0, 00:17:08.574 "rw_mbytes_per_sec": 0, 00:17:08.574 "r_mbytes_per_sec": 0, 00:17:08.574 "w_mbytes_per_sec": 0 00:17:08.574 }, 00:17:08.574 "claimed": true, 00:17:08.574 "claim_type": "exclusive_write", 00:17:08.574 
"zoned": false, 00:17:08.574 "supported_io_types": { 00:17:08.574 "read": true, 00:17:08.574 "write": true, 00:17:08.574 "unmap": true, 00:17:08.574 "flush": true, 00:17:08.574 "reset": true, 00:17:08.574 "nvme_admin": false, 00:17:08.574 "nvme_io": false, 00:17:08.575 "nvme_io_md": false, 00:17:08.575 "write_zeroes": true, 00:17:08.575 "zcopy": true, 00:17:08.575 "get_zone_info": false, 00:17:08.575 "zone_management": false, 00:17:08.575 "zone_append": false, 00:17:08.575 "compare": false, 00:17:08.575 "compare_and_write": false, 00:17:08.575 "abort": true, 00:17:08.575 "seek_hole": false, 00:17:08.575 "seek_data": false, 00:17:08.575 "copy": true, 00:17:08.575 "nvme_iov_md": false 00:17:08.575 }, 00:17:08.575 "memory_domains": [ 00:17:08.575 { 00:17:08.575 "dma_device_id": "system", 00:17:08.575 "dma_device_type": 1 00:17:08.575 }, 00:17:08.575 { 00:17:08.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.575 "dma_device_type": 2 00:17:08.575 } 00:17:08.575 ], 00:17:08.575 "driver_specific": {} 00:17:08.575 } 00:17:08.575 ] 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.575 "name": "Existed_Raid", 00:17:08.575 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:08.575 "strip_size_kb": 0, 00:17:08.575 "state": "online", 00:17:08.575 "raid_level": "raid1", 00:17:08.575 "superblock": true, 00:17:08.575 "num_base_bdevs": 3, 00:17:08.575 "num_base_bdevs_discovered": 3, 00:17:08.575 "num_base_bdevs_operational": 3, 00:17:08.575 "base_bdevs_list": [ 00:17:08.575 { 00:17:08.575 "name": "NewBaseBdev", 00:17:08.575 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:08.575 "is_configured": true, 00:17:08.575 "data_offset": 2048, 00:17:08.575 "data_size": 63488 00:17:08.575 }, 00:17:08.575 { 00:17:08.575 "name": "BaseBdev2", 00:17:08.575 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:08.575 "is_configured": true, 00:17:08.575 "data_offset": 2048, 00:17:08.575 "data_size": 63488 00:17:08.575 }, 00:17:08.575 
{ 00:17:08.575 "name": "BaseBdev3", 00:17:08.575 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:08.575 "is_configured": true, 00:17:08.575 "data_offset": 2048, 00:17:08.575 "data_size": 63488 00:17:08.575 } 00:17:08.575 ] 00:17:08.575 }' 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.575 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.144 [2024-11-06 09:10:07.902485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.144 09:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.145 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.145 "name": "Existed_Raid", 00:17:09.145 
"aliases": [ 00:17:09.145 "c6d9402d-4f35-460c-a8e1-df998e088804" 00:17:09.145 ], 00:17:09.145 "product_name": "Raid Volume", 00:17:09.145 "block_size": 512, 00:17:09.145 "num_blocks": 63488, 00:17:09.145 "uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:09.145 "assigned_rate_limits": { 00:17:09.145 "rw_ios_per_sec": 0, 00:17:09.145 "rw_mbytes_per_sec": 0, 00:17:09.145 "r_mbytes_per_sec": 0, 00:17:09.145 "w_mbytes_per_sec": 0 00:17:09.145 }, 00:17:09.145 "claimed": false, 00:17:09.145 "zoned": false, 00:17:09.145 "supported_io_types": { 00:17:09.145 "read": true, 00:17:09.145 "write": true, 00:17:09.145 "unmap": false, 00:17:09.145 "flush": false, 00:17:09.145 "reset": true, 00:17:09.145 "nvme_admin": false, 00:17:09.145 "nvme_io": false, 00:17:09.145 "nvme_io_md": false, 00:17:09.145 "write_zeroes": true, 00:17:09.145 "zcopy": false, 00:17:09.145 "get_zone_info": false, 00:17:09.145 "zone_management": false, 00:17:09.145 "zone_append": false, 00:17:09.145 "compare": false, 00:17:09.145 "compare_and_write": false, 00:17:09.145 "abort": false, 00:17:09.145 "seek_hole": false, 00:17:09.145 "seek_data": false, 00:17:09.145 "copy": false, 00:17:09.145 "nvme_iov_md": false 00:17:09.145 }, 00:17:09.145 "memory_domains": [ 00:17:09.145 { 00:17:09.145 "dma_device_id": "system", 00:17:09.145 "dma_device_type": 1 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.145 "dma_device_type": 2 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "dma_device_id": "system", 00:17:09.145 "dma_device_type": 1 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.145 "dma_device_type": 2 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "dma_device_id": "system", 00:17:09.145 "dma_device_type": 1 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.145 "dma_device_type": 2 00:17:09.145 } 00:17:09.145 ], 00:17:09.145 "driver_specific": { 00:17:09.145 "raid": { 00:17:09.145 
"uuid": "c6d9402d-4f35-460c-a8e1-df998e088804", 00:17:09.145 "strip_size_kb": 0, 00:17:09.145 "state": "online", 00:17:09.145 "raid_level": "raid1", 00:17:09.145 "superblock": true, 00:17:09.145 "num_base_bdevs": 3, 00:17:09.145 "num_base_bdevs_discovered": 3, 00:17:09.145 "num_base_bdevs_operational": 3, 00:17:09.145 "base_bdevs_list": [ 00:17:09.145 { 00:17:09.145 "name": "NewBaseBdev", 00:17:09.145 "uuid": "a1a39d16-e96c-4dbc-aa16-7c9637d51c30", 00:17:09.145 "is_configured": true, 00:17:09.145 "data_offset": 2048, 00:17:09.145 "data_size": 63488 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "name": "BaseBdev2", 00:17:09.145 "uuid": "5c17d7bf-7ddc-4485-b877-8d9d36f1c7f2", 00:17:09.145 "is_configured": true, 00:17:09.145 "data_offset": 2048, 00:17:09.145 "data_size": 63488 00:17:09.145 }, 00:17:09.145 { 00:17:09.145 "name": "BaseBdev3", 00:17:09.145 "uuid": "1a0f9783-cc7a-468a-8cea-bb589bc6c17a", 00:17:09.145 "is_configured": true, 00:17:09.145 "data_offset": 2048, 00:17:09.145 "data_size": 63488 00:17:09.145 } 00:17:09.145 ] 00:17:09.145 } 00:17:09.145 } 00:17:09.145 }' 00:17:09.145 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.145 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:09.145 BaseBdev2 00:17:09.145 BaseBdev3' 00:17:09.145 09:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:09.145 09:10:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.145 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.145 [2024-11-06 09:10:08.177825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.145 [2024-11-06 09:10:08.177899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.145 [2024-11-06 09:10:08.178018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.145 [2024-11-06 09:10:08.178398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.145 [2024-11-06 09:10:08.178425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67801 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 67801 ']' 
00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 67801 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67801 00:17:09.406 killing process with pid 67801 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67801' 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 67801 00:17:09.406 [2024-11-06 09:10:08.216367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.406 09:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 67801 00:17:09.664 [2024-11-06 09:10:08.552741] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.039 09:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:11.039 00:17:11.039 real 0m10.552s 00:17:11.039 user 0m16.480s 00:17:11.039 sys 0m2.099s 00:17:11.039 09:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.039 ************************************ 00:17:11.039 END TEST raid_state_function_test_sb 00:17:11.039 ************************************ 00:17:11.039 09:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 09:10:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:17:11.039 09:10:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:11.039 09:10:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.039 09:10:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.040 ************************************ 00:17:11.040 START TEST raid_superblock_test 00:17:11.040 ************************************ 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68421 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68421 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68421 ']' 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.040 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.040 [2024-11-06 09:10:09.974792] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:11.040 [2024-11-06 09:10:09.974927] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68421 ] 00:17:11.298 [2024-11-06 09:10:10.155674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.298 [2024-11-06 09:10:10.297977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.556 [2024-11-06 09:10:10.545058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.556 [2024-11-06 09:10:10.545132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.820 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:11.820 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:11.820 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:11.820 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:11.820 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:12.088 
09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 malloc1 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 [2024-11-06 09:10:10.912587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.088 [2024-11-06 09:10:10.912679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.088 [2024-11-06 09:10:10.912708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:12.088 [2024-11-06 09:10:10.912732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.088 [2024-11-06 09:10:10.915681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.088 [2024-11-06 09:10:10.915724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.088 pt1 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 malloc2 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 [2024-11-06 09:10:10.971638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.088 [2024-11-06 09:10:10.971702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.088 [2024-11-06 09:10:10.971734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:12.088 [2024-11-06 09:10:10.971749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.088 [2024-11-06 09:10:10.974588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.088 [2024-11-06 09:10:10.974628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.088 
pt2 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 malloc3 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 [2024-11-06 09:10:11.044364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:12.088 [2024-11-06 09:10:11.044431] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.088 [2024-11-06 09:10:11.044459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:12.088 [2024-11-06 09:10:11.044474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.088 [2024-11-06 09:10:11.047265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.088 [2024-11-06 09:10:11.047329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:12.088 pt3 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.088 [2024-11-06 09:10:11.052419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.088 [2024-11-06 09:10:11.055038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.088 [2024-11-06 09:10:11.055114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:12.088 [2024-11-06 09:10:11.055309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:12.088 [2024-11-06 09:10:11.055336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:12.088 [2024-11-06 09:10:11.055633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:12.088 
[2024-11-06 09:10:11.055915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:12.088 [2024-11-06 09:10:11.055935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:12.088 [2024-11-06 09:10:11.056104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.088 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.089 09:10:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.089 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.089 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.089 "name": "raid_bdev1", 00:17:12.089 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:12.089 "strip_size_kb": 0, 00:17:12.089 "state": "online", 00:17:12.089 "raid_level": "raid1", 00:17:12.089 "superblock": true, 00:17:12.089 "num_base_bdevs": 3, 00:17:12.089 "num_base_bdevs_discovered": 3, 00:17:12.089 "num_base_bdevs_operational": 3, 00:17:12.089 "base_bdevs_list": [ 00:17:12.089 { 00:17:12.089 "name": "pt1", 00:17:12.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.089 "is_configured": true, 00:17:12.089 "data_offset": 2048, 00:17:12.089 "data_size": 63488 00:17:12.089 }, 00:17:12.089 { 00:17:12.089 "name": "pt2", 00:17:12.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.089 "is_configured": true, 00:17:12.089 "data_offset": 2048, 00:17:12.089 "data_size": 63488 00:17:12.089 }, 00:17:12.089 { 00:17:12.089 "name": "pt3", 00:17:12.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.089 "is_configured": true, 00:17:12.089 "data_offset": 2048, 00:17:12.089 "data_size": 63488 00:17:12.089 } 00:17:12.089 ] 00:17:12.089 }' 00:17:12.089 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.089 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:12.656 09:10:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.656 [2024-11-06 09:10:11.484183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:12.656 "name": "raid_bdev1", 00:17:12.656 "aliases": [ 00:17:12.656 "2bea6a9f-27de-4213-95e9-2cb824119169" 00:17:12.656 ], 00:17:12.656 "product_name": "Raid Volume", 00:17:12.656 "block_size": 512, 00:17:12.656 "num_blocks": 63488, 00:17:12.656 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:12.656 "assigned_rate_limits": { 00:17:12.656 "rw_ios_per_sec": 0, 00:17:12.656 "rw_mbytes_per_sec": 0, 00:17:12.656 "r_mbytes_per_sec": 0, 00:17:12.656 "w_mbytes_per_sec": 0 00:17:12.656 }, 00:17:12.656 "claimed": false, 00:17:12.656 "zoned": false, 00:17:12.656 "supported_io_types": { 00:17:12.656 "read": true, 00:17:12.656 "write": true, 00:17:12.656 "unmap": false, 00:17:12.656 "flush": false, 00:17:12.656 "reset": true, 00:17:12.656 "nvme_admin": false, 00:17:12.656 "nvme_io": false, 00:17:12.656 "nvme_io_md": false, 00:17:12.656 "write_zeroes": true, 00:17:12.656 "zcopy": false, 00:17:12.656 "get_zone_info": false, 00:17:12.656 "zone_management": false, 00:17:12.656 "zone_append": false, 00:17:12.656 "compare": false, 00:17:12.656 
"compare_and_write": false, 00:17:12.656 "abort": false, 00:17:12.656 "seek_hole": false, 00:17:12.656 "seek_data": false, 00:17:12.656 "copy": false, 00:17:12.656 "nvme_iov_md": false 00:17:12.656 }, 00:17:12.656 "memory_domains": [ 00:17:12.656 { 00:17:12.656 "dma_device_id": "system", 00:17:12.656 "dma_device_type": 1 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.656 "dma_device_type": 2 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "dma_device_id": "system", 00:17:12.656 "dma_device_type": 1 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.656 "dma_device_type": 2 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "dma_device_id": "system", 00:17:12.656 "dma_device_type": 1 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.656 "dma_device_type": 2 00:17:12.656 } 00:17:12.656 ], 00:17:12.656 "driver_specific": { 00:17:12.656 "raid": { 00:17:12.656 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:12.656 "strip_size_kb": 0, 00:17:12.656 "state": "online", 00:17:12.656 "raid_level": "raid1", 00:17:12.656 "superblock": true, 00:17:12.656 "num_base_bdevs": 3, 00:17:12.656 "num_base_bdevs_discovered": 3, 00:17:12.656 "num_base_bdevs_operational": 3, 00:17:12.656 "base_bdevs_list": [ 00:17:12.656 { 00:17:12.656 "name": "pt1", 00:17:12.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.656 "is_configured": true, 00:17:12.656 "data_offset": 2048, 00:17:12.656 "data_size": 63488 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "name": "pt2", 00:17:12.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.656 "is_configured": true, 00:17:12.656 "data_offset": 2048, 00:17:12.656 "data_size": 63488 00:17:12.656 }, 00:17:12.656 { 00:17:12.656 "name": "pt3", 00:17:12.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.656 "is_configured": true, 00:17:12.656 "data_offset": 2048, 00:17:12.656 "data_size": 63488 00:17:12.656 } 
00:17:12.656 ] 00:17:12.656 } 00:17:12.656 } 00:17:12.656 }' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:12.656 pt2 00:17:12.656 pt3' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.656 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.657 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:12.917 [2024-11-06 09:10:11.715707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2bea6a9f-27de-4213-95e9-2cb824119169 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2bea6a9f-27de-4213-95e9-2cb824119169 ']' 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 [2024-11-06 09:10:11.743417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.917 [2024-11-06 09:10:11.743460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.917 [2024-11-06 09:10:11.743566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.917 [2024-11-06 09:10:11.743661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.917 [2024-11-06 09:10:11.743675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:12.917 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:12.918 09:10:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.918 [2024-11-06 09:10:11.859546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:12.918 [2024-11-06 09:10:11.862230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:12.918 [2024-11-06 09:10:11.862551] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:12.918 [2024-11-06 09:10:11.862639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:12.918 [2024-11-06 09:10:11.862719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:12.918 [2024-11-06 09:10:11.862746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:12.918 [2024-11-06 09:10:11.862774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.918 [2024-11-06 09:10:11.862788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:12.918 request: 00:17:12.918 { 00:17:12.918 "name": "raid_bdev1", 00:17:12.918 "raid_level": "raid1", 00:17:12.918 "base_bdevs": [ 00:17:12.918 "malloc1", 00:17:12.918 "malloc2", 00:17:12.918 "malloc3" 00:17:12.918 ], 00:17:12.918 "superblock": false, 00:17:12.918 "method": "bdev_raid_create", 00:17:12.918 "req_id": 1 00:17:12.918 } 00:17:12.918 Got JSON-RPC error response 00:17:12.918 response: 00:17:12.918 { 00:17:12.918 "code": -17, 00:17:12.918 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:12.918 } 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.918 [2024-11-06 09:10:11.911492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.918 [2024-11-06 09:10:11.911601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.918 [2024-11-06 09:10:11.911640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:12.918 [2024-11-06 09:10:11.911656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.918 [2024-11-06 09:10:11.914683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.918 [2024-11-06 09:10:11.914736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.918 [2024-11-06 09:10:11.914861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:12.918 [2024-11-06 09:10:11.914936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.918 pt1 00:17:12.918 
09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.918 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.177 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.177 "name": "raid_bdev1", 00:17:13.177 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:13.177 "strip_size_kb": 0, 00:17:13.177 
"state": "configuring", 00:17:13.177 "raid_level": "raid1", 00:17:13.177 "superblock": true, 00:17:13.177 "num_base_bdevs": 3, 00:17:13.177 "num_base_bdevs_discovered": 1, 00:17:13.177 "num_base_bdevs_operational": 3, 00:17:13.177 "base_bdevs_list": [ 00:17:13.177 { 00:17:13.177 "name": "pt1", 00:17:13.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.177 "is_configured": true, 00:17:13.177 "data_offset": 2048, 00:17:13.177 "data_size": 63488 00:17:13.177 }, 00:17:13.177 { 00:17:13.177 "name": null, 00:17:13.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.177 "is_configured": false, 00:17:13.177 "data_offset": 2048, 00:17:13.177 "data_size": 63488 00:17:13.177 }, 00:17:13.177 { 00:17:13.177 "name": null, 00:17:13.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.177 "is_configured": false, 00:17:13.177 "data_offset": 2048, 00:17:13.177 "data_size": 63488 00:17:13.177 } 00:17:13.177 ] 00:17:13.177 }' 00:17:13.177 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.177 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.436 [2024-11-06 09:10:12.295519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.436 [2024-11-06 09:10:12.295840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.436 [2024-11-06 09:10:12.295970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:13.436 
[2024-11-06 09:10:12.296069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.436 [2024-11-06 09:10:12.296779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.436 [2024-11-06 09:10:12.296935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.436 [2024-11-06 09:10:12.297176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:13.436 [2024-11-06 09:10:12.297337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.436 pt2 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.436 [2024-11-06 09:10:12.303475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.436 "name": "raid_bdev1", 00:17:13.436 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:13.436 "strip_size_kb": 0, 00:17:13.436 "state": "configuring", 00:17:13.436 "raid_level": "raid1", 00:17:13.436 "superblock": true, 00:17:13.436 "num_base_bdevs": 3, 00:17:13.436 "num_base_bdevs_discovered": 1, 00:17:13.436 "num_base_bdevs_operational": 3, 00:17:13.436 "base_bdevs_list": [ 00:17:13.436 { 00:17:13.436 "name": "pt1", 00:17:13.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.436 "is_configured": true, 00:17:13.436 "data_offset": 2048, 00:17:13.436 "data_size": 63488 00:17:13.436 }, 00:17:13.436 { 00:17:13.436 "name": null, 00:17:13.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.436 "is_configured": false, 00:17:13.436 "data_offset": 0, 00:17:13.436 "data_size": 63488 00:17:13.436 }, 00:17:13.436 { 00:17:13.436 "name": null, 00:17:13.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.436 "is_configured": false, 00:17:13.436 
"data_offset": 2048, 00:17:13.436 "data_size": 63488 00:17:13.436 } 00:17:13.436 ] 00:17:13.436 }' 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.436 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 [2024-11-06 09:10:12.679230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.695 [2024-11-06 09:10:12.679376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.695 [2024-11-06 09:10:12.679407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:13.695 [2024-11-06 09:10:12.679427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.695 [2024-11-06 09:10:12.680092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.695 [2024-11-06 09:10:12.680133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.695 [2024-11-06 09:10:12.680268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:13.695 [2024-11-06 09:10:12.680351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.695 pt2 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.695 09:10:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 [2024-11-06 09:10:12.687206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.695 [2024-11-06 09:10:12.687322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.695 [2024-11-06 09:10:12.687359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:13.695 [2024-11-06 09:10:12.687380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.695 [2024-11-06 09:10:12.687999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.695 [2024-11-06 09:10:12.688034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.695 [2024-11-06 09:10:12.688165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:13.695 [2024-11-06 09:10:12.688212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.695 [2024-11-06 09:10:12.688413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:13.695 [2024-11-06 09:10:12.688436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.695 [2024-11-06 09:10:12.688761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:13.695 [2024-11-06 09:10:12.688971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:17:13.695 [2024-11-06 09:10:12.688994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:13.695 [2024-11-06 09:10:12.689184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.695 pt3 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.695 09:10:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.695 "name": "raid_bdev1", 00:17:13.695 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:13.695 "strip_size_kb": 0, 00:17:13.695 "state": "online", 00:17:13.695 "raid_level": "raid1", 00:17:13.695 "superblock": true, 00:17:13.695 "num_base_bdevs": 3, 00:17:13.695 "num_base_bdevs_discovered": 3, 00:17:13.695 "num_base_bdevs_operational": 3, 00:17:13.695 "base_bdevs_list": [ 00:17:13.695 { 00:17:13.695 "name": "pt1", 00:17:13.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.695 "is_configured": true, 00:17:13.695 "data_offset": 2048, 00:17:13.695 "data_size": 63488 00:17:13.695 }, 00:17:13.695 { 00:17:13.695 "name": "pt2", 00:17:13.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.695 "is_configured": true, 00:17:13.695 "data_offset": 2048, 00:17:13.695 "data_size": 63488 00:17:13.695 }, 00:17:13.695 { 00:17:13.695 "name": "pt3", 00:17:13.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.695 "is_configured": true, 00:17:13.695 "data_offset": 2048, 00:17:13.695 "data_size": 63488 00:17:13.695 } 00:17:13.695 ] 00:17:13.695 }' 00:17:13.695 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.696 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.281 [2024-11-06 09:10:13.075044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.281 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.281 "name": "raid_bdev1", 00:17:14.281 "aliases": [ 00:17:14.281 "2bea6a9f-27de-4213-95e9-2cb824119169" 00:17:14.281 ], 00:17:14.281 "product_name": "Raid Volume", 00:17:14.281 "block_size": 512, 00:17:14.281 "num_blocks": 63488, 00:17:14.281 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:14.281 "assigned_rate_limits": { 00:17:14.281 "rw_ios_per_sec": 0, 00:17:14.281 "rw_mbytes_per_sec": 0, 00:17:14.281 "r_mbytes_per_sec": 0, 00:17:14.281 "w_mbytes_per_sec": 0 00:17:14.281 }, 00:17:14.281 "claimed": false, 00:17:14.281 "zoned": false, 00:17:14.281 "supported_io_types": { 00:17:14.281 "read": true, 00:17:14.281 "write": true, 00:17:14.281 "unmap": false, 00:17:14.281 "flush": false, 00:17:14.281 "reset": true, 00:17:14.281 "nvme_admin": false, 00:17:14.281 "nvme_io": false, 00:17:14.281 "nvme_io_md": false, 00:17:14.281 "write_zeroes": true, 00:17:14.281 "zcopy": false, 00:17:14.281 "get_zone_info": 
false, 00:17:14.281 "zone_management": false, 00:17:14.281 "zone_append": false, 00:17:14.281 "compare": false, 00:17:14.281 "compare_and_write": false, 00:17:14.281 "abort": false, 00:17:14.281 "seek_hole": false, 00:17:14.281 "seek_data": false, 00:17:14.281 "copy": false, 00:17:14.281 "nvme_iov_md": false 00:17:14.281 }, 00:17:14.281 "memory_domains": [ 00:17:14.281 { 00:17:14.281 "dma_device_id": "system", 00:17:14.281 "dma_device_type": 1 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.281 "dma_device_type": 2 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "dma_device_id": "system", 00:17:14.281 "dma_device_type": 1 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.281 "dma_device_type": 2 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "dma_device_id": "system", 00:17:14.281 "dma_device_type": 1 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.281 "dma_device_type": 2 00:17:14.281 } 00:17:14.281 ], 00:17:14.281 "driver_specific": { 00:17:14.281 "raid": { 00:17:14.281 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:14.281 "strip_size_kb": 0, 00:17:14.281 "state": "online", 00:17:14.281 "raid_level": "raid1", 00:17:14.281 "superblock": true, 00:17:14.281 "num_base_bdevs": 3, 00:17:14.281 "num_base_bdevs_discovered": 3, 00:17:14.281 "num_base_bdevs_operational": 3, 00:17:14.281 "base_bdevs_list": [ 00:17:14.281 { 00:17:14.281 "name": "pt1", 00:17:14.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.281 "is_configured": true, 00:17:14.281 "data_offset": 2048, 00:17:14.281 "data_size": 63488 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "name": "pt2", 00:17:14.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.281 "is_configured": true, 00:17:14.281 "data_offset": 2048, 00:17:14.281 "data_size": 63488 00:17:14.281 }, 00:17:14.281 { 00:17:14.281 "name": "pt3", 00:17:14.281 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:17:14.281 "is_configured": true, 00:17:14.281 "data_offset": 2048, 00:17:14.281 "data_size": 63488 00:17:14.281 } 00:17:14.281 ] 00:17:14.281 } 00:17:14.281 } 00:17:14.281 }' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.282 pt2 00:17:14.282 pt3' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.282 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.541 [2024-11-06 09:10:13.334794] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.541 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2bea6a9f-27de-4213-95e9-2cb824119169 '!=' 2bea6a9f-27de-4213-95e9-2cb824119169 ']' 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.542 [2024-11-06 09:10:13.374547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.542 09:10:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.542 "name": "raid_bdev1", 00:17:14.542 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:14.542 "strip_size_kb": 0, 00:17:14.542 "state": "online", 00:17:14.542 "raid_level": "raid1", 00:17:14.542 "superblock": true, 00:17:14.542 "num_base_bdevs": 3, 00:17:14.542 "num_base_bdevs_discovered": 2, 00:17:14.542 "num_base_bdevs_operational": 2, 00:17:14.542 "base_bdevs_list": [ 00:17:14.542 { 00:17:14.542 "name": null, 00:17:14.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.542 "is_configured": false, 00:17:14.542 "data_offset": 0, 00:17:14.542 "data_size": 63488 00:17:14.542 }, 00:17:14.542 { 00:17:14.542 "name": "pt2", 00:17:14.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.542 "is_configured": true, 00:17:14.542 "data_offset": 2048, 00:17:14.542 "data_size": 63488 00:17:14.542 }, 00:17:14.542 { 00:17:14.542 "name": "pt3", 00:17:14.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.542 "is_configured": true, 00:17:14.542 "data_offset": 2048, 00:17:14.542 "data_size": 63488 00:17:14.542 } 
00:17:14.542 ] 00:17:14.542 }' 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.542 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.801 [2024-11-06 09:10:13.817977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.801 [2024-11-06 09:10:13.818245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.801 [2024-11-06 09:10:13.818424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.801 [2024-11-06 09:10:13.818507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.801 [2024-11-06 09:10:13.818532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.801 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 09:10:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 [2024-11-06 09:10:13.897758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.060 [2024-11-06 09:10:13.897842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.060 [2024-11-06 09:10:13.897868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:15.060 [2024-11-06 09:10:13.897887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.060 [2024-11-06 09:10:13.905335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.060 [2024-11-06 09:10:13.905467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.060 [2024-11-06 09:10:13.905742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.060 [2024-11-06 09:10:13.905889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.060 pt2 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.060 09:10:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.060 "name": "raid_bdev1", 00:17:15.060 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:15.060 "strip_size_kb": 0, 00:17:15.060 "state": "configuring", 00:17:15.060 "raid_level": "raid1", 00:17:15.060 "superblock": true, 00:17:15.060 "num_base_bdevs": 3, 00:17:15.060 "num_base_bdevs_discovered": 1, 00:17:15.060 "num_base_bdevs_operational": 2, 00:17:15.060 "base_bdevs_list": [ 00:17:15.060 { 00:17:15.060 "name": null, 00:17:15.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.061 "is_configured": false, 00:17:15.061 "data_offset": 2048, 00:17:15.061 "data_size": 63488 00:17:15.061 }, 00:17:15.061 { 00:17:15.061 "name": "pt2", 00:17:15.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.061 "is_configured": true, 00:17:15.061 "data_offset": 2048, 00:17:15.061 "data_size": 63488 00:17:15.061 }, 00:17:15.061 { 00:17:15.061 "name": null, 00:17:15.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.061 "is_configured": false, 00:17:15.061 "data_offset": 2048, 00:17:15.061 "data_size": 63488 00:17:15.061 } 
00:17:15.061 ] 00:17:15.061 }' 00:17:15.061 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.061 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.320 [2024-11-06 09:10:14.333257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:15.320 [2024-11-06 09:10:14.333338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.320 [2024-11-06 09:10:14.333362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:15.320 [2024-11-06 09:10:14.333377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.320 [2024-11-06 09:10:14.333875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.320 [2024-11-06 09:10:14.333899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.320 [2024-11-06 09:10:14.333992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:15.320 [2024-11-06 09:10:14.334021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.320 [2024-11-06 09:10:14.334143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:17:15.320 [2024-11-06 09:10:14.334156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.320 [2024-11-06 09:10:14.334442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:15.320 [2024-11-06 09:10:14.334600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:15.320 [2024-11-06 09:10:14.334615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:15.320 [2024-11-06 09:10:14.334760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.320 pt3 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.320 
09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.320 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.579 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.579 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.579 "name": "raid_bdev1", 00:17:15.579 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:15.579 "strip_size_kb": 0, 00:17:15.579 "state": "online", 00:17:15.579 "raid_level": "raid1", 00:17:15.579 "superblock": true, 00:17:15.579 "num_base_bdevs": 3, 00:17:15.579 "num_base_bdevs_discovered": 2, 00:17:15.579 "num_base_bdevs_operational": 2, 00:17:15.579 "base_bdevs_list": [ 00:17:15.579 { 00:17:15.579 "name": null, 00:17:15.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.579 "is_configured": false, 00:17:15.579 "data_offset": 2048, 00:17:15.579 "data_size": 63488 00:17:15.579 }, 00:17:15.579 { 00:17:15.579 "name": "pt2", 00:17:15.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.579 "is_configured": true, 00:17:15.579 "data_offset": 2048, 00:17:15.579 "data_size": 63488 00:17:15.579 }, 00:17:15.579 { 00:17:15.579 "name": "pt3", 00:17:15.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.579 "is_configured": true, 00:17:15.579 "data_offset": 2048, 00:17:15.579 "data_size": 63488 00:17:15.579 } 00:17:15.579 ] 00:17:15.579 }' 00:17:15.579 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.579 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.838 [2024-11-06 09:10:14.712690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.838 [2024-11-06 09:10:14.712725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.838 [2024-11-06 09:10:14.712809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.838 [2024-11-06 09:10:14.712877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.838 [2024-11-06 09:10:14.712889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.838 [2024-11-06 09:10:14.776597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.838 [2024-11-06 09:10:14.776654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.838 [2024-11-06 09:10:14.776679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:15.838 [2024-11-06 09:10:14.776690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.838 [2024-11-06 09:10:14.779133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.838 [2024-11-06 09:10:14.779171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.838 [2024-11-06 09:10:14.779248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:15.838 [2024-11-06 09:10:14.779307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:15.838 [2024-11-06 09:10:14.779426] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:15.838 [2024-11-06 09:10:14.779438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.838 [2024-11-06 09:10:14.779455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:17:15.838 [2024-11-06 09:10:14.779506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.838 pt1 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.838 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.839 "name": "raid_bdev1", 00:17:15.839 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:15.839 "strip_size_kb": 0, 00:17:15.839 "state": "configuring", 00:17:15.839 "raid_level": "raid1", 00:17:15.839 "superblock": true, 00:17:15.839 "num_base_bdevs": 3, 00:17:15.839 "num_base_bdevs_discovered": 1, 00:17:15.839 "num_base_bdevs_operational": 2, 00:17:15.839 "base_bdevs_list": [ 00:17:15.839 { 00:17:15.839 "name": null, 00:17:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.839 "is_configured": false, 00:17:15.839 "data_offset": 2048, 00:17:15.839 "data_size": 63488 00:17:15.839 }, 00:17:15.839 { 00:17:15.839 "name": "pt2", 00:17:15.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.839 "is_configured": true, 00:17:15.839 "data_offset": 2048, 00:17:15.839 "data_size": 63488 00:17:15.839 }, 00:17:15.839 { 00:17:15.839 "name": null, 00:17:15.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.839 "is_configured": false, 00:17:15.839 "data_offset": 2048, 00:17:15.839 "data_size": 63488 00:17:15.839 } 00:17:15.839 ] 00:17:15.839 }' 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.839 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.098 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:16.098 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.098 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.098 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:16.358 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:16.358 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:16.358 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.358 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.358 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.358 [2024-11-06 09:10:15.180028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.358 [2024-11-06 09:10:15.180213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.358 [2024-11-06 09:10:15.180294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:16.359 [2024-11-06 09:10:15.180509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.359 [2024-11-06 09:10:15.180995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.359 [2024-11-06 09:10:15.181126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.359 [2024-11-06 09:10:15.181312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.359 [2024-11-06 09:10:15.181368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.359 [2024-11-06 09:10:15.181502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:16.359 [2024-11-06 09:10:15.181511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.359 [2024-11-06 09:10:15.181789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:16.359 [2024-11-06 09:10:15.181941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:16.359 [2024-11-06 09:10:15.181955] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:16.359 [2024-11-06 09:10:15.182097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.359 pt3 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.359 "name": "raid_bdev1", 00:17:16.359 "uuid": "2bea6a9f-27de-4213-95e9-2cb824119169", 00:17:16.359 "strip_size_kb": 0, 00:17:16.359 "state": "online", 00:17:16.359 "raid_level": "raid1", 00:17:16.359 "superblock": true, 00:17:16.359 "num_base_bdevs": 3, 00:17:16.359 "num_base_bdevs_discovered": 2, 00:17:16.359 "num_base_bdevs_operational": 2, 00:17:16.359 "base_bdevs_list": [ 00:17:16.359 { 00:17:16.359 "name": null, 00:17:16.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.359 "is_configured": false, 00:17:16.359 "data_offset": 2048, 00:17:16.359 "data_size": 63488 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "name": "pt2", 00:17:16.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.359 "is_configured": true, 00:17:16.359 "data_offset": 2048, 00:17:16.359 "data_size": 63488 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "name": "pt3", 00:17:16.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.359 "is_configured": true, 00:17:16.359 "data_offset": 2048, 00:17:16.359 "data_size": 63488 00:17:16.359 } 00:17:16.359 ] 00:17:16.359 }' 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.359 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.629 [2024-11-06 09:10:15.603677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2bea6a9f-27de-4213-95e9-2cb824119169 '!=' 2bea6a9f-27de-4213-95e9-2cb824119169 ']' 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68421 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68421 ']' 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68421 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.629 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68421 00:17:16.888 killing process with pid 68421 00:17:16.888 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:16.888 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:16.888 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68421' 00:17:16.888 09:10:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68421 00:17:16.888 [2024-11-06 09:10:15.683907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.888 [2024-11-06 09:10:15.683996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.888 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68421 00:17:16.888 [2024-11-06 09:10:15.684058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.889 [2024-11-06 09:10:15.684073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:17.148 [2024-11-06 09:10:15.986875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.085 09:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:18.085 00:17:18.085 real 0m7.239s 00:17:18.085 user 0m11.033s 00:17:18.085 sys 0m1.558s 00:17:18.085 09:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.085 09:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.085 ************************************ 00:17:18.085 END TEST raid_superblock_test 00:17:18.085 ************************************ 00:17:18.344 09:10:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:17:18.344 09:10:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:18.344 09:10:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.344 09:10:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.344 ************************************ 00:17:18.344 START TEST raid_read_error_test 00:17:18.344 ************************************ 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:17:18.344 09:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.344 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:18.345 09:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aD6wpbip2X 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68856 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68856 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 68856 ']' 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.345 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.345 [2024-11-06 09:10:17.286901] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:18.345 [2024-11-06 09:10:17.287025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68856 ] 00:17:18.604 [2024-11-06 09:10:17.452860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.604 [2024-11-06 09:10:17.605554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.863 [2024-11-06 09:10:17.816082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.863 [2024-11-06 09:10:17.816334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.123 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 BaseBdev1_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 true 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 [2024-11-06 09:10:18.187218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:19.383 [2024-11-06 09:10:18.187292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.383 [2024-11-06 09:10:18.187317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:19.383 [2024-11-06 09:10:18.187332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.383 [2024-11-06 09:10:18.189703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.383 [2024-11-06 09:10:18.189745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:19.383 BaseBdev1 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 BaseBdev2_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 true 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 [2024-11-06 09:10:18.254300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:19.383 [2024-11-06 09:10:18.254472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.383 [2024-11-06 09:10:18.254501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:19.383 [2024-11-06 09:10:18.254516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.383 [2024-11-06 09:10:18.256839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.383 [2024-11-06 09:10:18.256882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:19.383 BaseBdev2 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 BaseBdev3_malloc 00:17:19.383 09:10:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 true 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 [2024-11-06 09:10:18.336148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:19.383 [2024-11-06 09:10:18.336203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.383 [2024-11-06 09:10:18.336224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:19.383 [2024-11-06 09:10:18.336239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.383 [2024-11-06 09:10:18.338610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.383 [2024-11-06 09:10:18.338650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:19.383 BaseBdev3 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 [2024-11-06 09:10:18.348195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.383 [2024-11-06 09:10:18.350396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.383 [2024-11-06 09:10:18.350468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:19.383 [2024-11-06 09:10:18.350672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:19.383 [2024-11-06 09:10:18.350685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.383 [2024-11-06 09:10:18.350940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:19.383 [2024-11-06 09:10:18.351110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:19.383 [2024-11-06 09:10:18.351124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:19.383 [2024-11-06 09:10:18.351303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.383 09:10:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.383 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.383 "name": "raid_bdev1", 00:17:19.383 "uuid": "1a3ba0a4-7f9e-4378-9bfe-7c42e37707fd", 00:17:19.383 "strip_size_kb": 0, 00:17:19.383 "state": "online", 00:17:19.383 "raid_level": "raid1", 00:17:19.383 "superblock": true, 00:17:19.383 "num_base_bdevs": 3, 00:17:19.383 "num_base_bdevs_discovered": 3, 00:17:19.383 "num_base_bdevs_operational": 3, 00:17:19.383 "base_bdevs_list": [ 00:17:19.383 { 00:17:19.384 "name": "BaseBdev1", 00:17:19.384 "uuid": "a3832cc0-5c6d-5b50-88bd-b26f6486d21f", 00:17:19.384 "is_configured": true, 00:17:19.384 "data_offset": 2048, 00:17:19.384 "data_size": 63488 00:17:19.384 }, 00:17:19.384 { 00:17:19.384 "name": "BaseBdev2", 00:17:19.384 "uuid": "abb6f2e0-2a24-5ddd-8e0f-9cd9ee2dcbac", 00:17:19.384 "is_configured": true, 00:17:19.384 "data_offset": 2048, 00:17:19.384 "data_size": 63488 
00:17:19.384 }, 00:17:19.384 { 00:17:19.384 "name": "BaseBdev3", 00:17:19.384 "uuid": "d8a3a19b-293b-5517-bdb2-17ae3b7013e4", 00:17:19.384 "is_configured": true, 00:17:19.384 "data_offset": 2048, 00:17:19.384 "data_size": 63488 00:17:19.384 } 00:17:19.384 ] 00:17:19.384 }' 00:17:19.384 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.384 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.953 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:19.953 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:19.953 [2024-11-06 09:10:18.853096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.888 
09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.888 "name": "raid_bdev1", 00:17:20.888 "uuid": "1a3ba0a4-7f9e-4378-9bfe-7c42e37707fd", 00:17:20.888 "strip_size_kb": 0, 00:17:20.888 "state": "online", 00:17:20.888 "raid_level": "raid1", 00:17:20.888 "superblock": true, 00:17:20.888 "num_base_bdevs": 3, 00:17:20.888 "num_base_bdevs_discovered": 3, 00:17:20.888 "num_base_bdevs_operational": 3, 00:17:20.888 "base_bdevs_list": [ 00:17:20.888 { 00:17:20.888 "name": "BaseBdev1", 00:17:20.888 "uuid": "a3832cc0-5c6d-5b50-88bd-b26f6486d21f", 
00:17:20.888 "is_configured": true, 00:17:20.888 "data_offset": 2048, 00:17:20.888 "data_size": 63488 00:17:20.888 }, 00:17:20.888 { 00:17:20.888 "name": "BaseBdev2", 00:17:20.888 "uuid": "abb6f2e0-2a24-5ddd-8e0f-9cd9ee2dcbac", 00:17:20.888 "is_configured": true, 00:17:20.888 "data_offset": 2048, 00:17:20.888 "data_size": 63488 00:17:20.888 }, 00:17:20.888 { 00:17:20.888 "name": "BaseBdev3", 00:17:20.888 "uuid": "d8a3a19b-293b-5517-bdb2-17ae3b7013e4", 00:17:20.888 "is_configured": true, 00:17:20.888 "data_offset": 2048, 00:17:20.888 "data_size": 63488 00:17:20.888 } 00:17:20.888 ] 00:17:20.888 }' 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.888 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.455 [2024-11-06 09:10:20.199469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.455 [2024-11-06 09:10:20.199511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.455 [2024-11-06 09:10:20.202481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.455 [2024-11-06 09:10:20.202549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.455 [2024-11-06 09:10:20.202686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.455 [2024-11-06 09:10:20.202703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:21.455 { 00:17:21.455 "results": [ 00:17:21.455 { 00:17:21.455 "job": "raid_bdev1", 
00:17:21.455 "core_mask": "0x1", 00:17:21.455 "workload": "randrw", 00:17:21.455 "percentage": 50, 00:17:21.455 "status": "finished", 00:17:21.455 "queue_depth": 1, 00:17:21.455 "io_size": 131072, 00:17:21.455 "runtime": 1.345632, 00:17:21.455 "iops": 10694.602982093173, 00:17:21.455 "mibps": 1336.8253727616466, 00:17:21.455 "io_failed": 0, 00:17:21.455 "io_timeout": 0, 00:17:21.455 "avg_latency_us": 90.60167200662842, 00:17:21.455 "min_latency_us": 23.955020080321287, 00:17:21.455 "max_latency_us": 5027.058634538153 00:17:21.455 } 00:17:21.455 ], 00:17:21.455 "core_count": 1 00:17:21.455 } 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68856 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 68856 ']' 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 68856 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68856 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:21.455 killing process with pid 68856 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68856' 00:17:21.455 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 68856 00:17:21.455 [2024-11-06 09:10:20.242071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.455 09:10:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 68856 00:17:21.455 [2024-11-06 09:10:20.476230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aD6wpbip2X 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:22.831 ************************************ 00:17:22.831 END TEST raid_read_error_test 00:17:22.831 ************************************ 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:22.831 00:17:22.831 real 0m4.490s 00:17:22.831 user 0m5.245s 00:17:22.831 sys 0m0.587s 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.831 09:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.831 09:10:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:22.831 09:10:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:22.831 09:10:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.831 09:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.831 ************************************ 00:17:22.831 START TEST raid_write_error_test 00:17:22.831 ************************************ 00:17:22.831 09:10:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nwvBi4qWnN 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69007 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69007 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69007 ']' 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.832 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.832 [2024-11-06 09:10:21.831915] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:17:22.832 [2024-11-06 09:10:21.832227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69007 ] 00:17:23.091 [2024-11-06 09:10:21.995793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.349 [2024-11-06 09:10:22.133968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.349 [2024-11-06 09:10:22.354253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.349 [2024-11-06 09:10:22.354337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 BaseBdev1_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 true 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 [2024-11-06 09:10:22.753723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:23.916 [2024-11-06 09:10:22.753903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.916 [2024-11-06 09:10:22.753961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:23.916 [2024-11-06 09:10:22.754049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.916 [2024-11-06 09:10:22.756491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.916 [2024-11-06 09:10:22.756635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.916 BaseBdev1 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.916 BaseBdev2_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 true 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 [2024-11-06 09:10:22.821891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:23.916 [2024-11-06 09:10:22.821952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.916 [2024-11-06 09:10:22.821972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:23.916 [2024-11-06 09:10:22.821986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.916 [2024-11-06 09:10:22.824354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.916 [2024-11-06 09:10:22.824394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:23.916 BaseBdev2 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:23.916 09:10:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 BaseBdev3_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 true 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 [2024-11-06 09:10:22.905754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:23.916 [2024-11-06 09:10:22.905915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.916 [2024-11-06 09:10:22.905942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:23.916 [2024-11-06 09:10:22.905956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.916 [2024-11-06 09:10:22.908329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.916 [2024-11-06 09:10:22.908369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:17:23.916 BaseBdev3 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 [2024-11-06 09:10:22.917807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.916 [2024-11-06 09:10:22.919848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.916 [2024-11-06 09:10:22.919919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.916 [2024-11-06 09:10:22.920123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:23.916 [2024-11-06 09:10:22.920136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:23.916 [2024-11-06 09:10:22.920418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:23.916 [2024-11-06 09:10:22.920594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:23.916 [2024-11-06 09:10:22.920607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:23.916 [2024-11-06 09:10:22.920749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.175 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.175 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.175 "name": "raid_bdev1", 00:17:24.175 "uuid": "96b4a815-c596-449b-8abb-3b3de3a398ed", 00:17:24.175 "strip_size_kb": 0, 00:17:24.175 "state": "online", 00:17:24.175 "raid_level": "raid1", 00:17:24.175 "superblock": true, 00:17:24.175 "num_base_bdevs": 3, 00:17:24.175 "num_base_bdevs_discovered": 3, 00:17:24.175 "num_base_bdevs_operational": 3, 00:17:24.175 "base_bdevs_list": [ 00:17:24.175 { 00:17:24.175 "name": "BaseBdev1", 00:17:24.175 
"uuid": "9c1e31dc-59e9-5196-bd79-c5366602a5a6", 00:17:24.175 "is_configured": true, 00:17:24.175 "data_offset": 2048, 00:17:24.175 "data_size": 63488 00:17:24.175 }, 00:17:24.175 { 00:17:24.175 "name": "BaseBdev2", 00:17:24.175 "uuid": "474aaf25-f555-5c79-9355-78252c7c16f9", 00:17:24.175 "is_configured": true, 00:17:24.175 "data_offset": 2048, 00:17:24.175 "data_size": 63488 00:17:24.175 }, 00:17:24.175 { 00:17:24.175 "name": "BaseBdev3", 00:17:24.175 "uuid": "1d21dd34-614d-51a7-97f8-22cda36b401c", 00:17:24.175 "is_configured": true, 00:17:24.175 "data_offset": 2048, 00:17:24.175 "data_size": 63488 00:17:24.175 } 00:17:24.175 ] 00:17:24.175 }' 00:17:24.175 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.175 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.434 09:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:24.434 09:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:24.434 [2024-11-06 09:10:23.431140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:25.392 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 [2024-11-06 09:10:24.367192] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:25.393 [2024-11-06 09:10:24.367254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.393 [2024-11-06 09:10:24.367485] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.393 "name": "raid_bdev1", 00:17:25.393 "uuid": "96b4a815-c596-449b-8abb-3b3de3a398ed", 00:17:25.393 "strip_size_kb": 0, 00:17:25.393 "state": "online", 00:17:25.393 "raid_level": "raid1", 00:17:25.393 "superblock": true, 00:17:25.393 "num_base_bdevs": 3, 00:17:25.393 "num_base_bdevs_discovered": 2, 00:17:25.393 "num_base_bdevs_operational": 2, 00:17:25.393 "base_bdevs_list": [ 00:17:25.393 { 00:17:25.393 "name": null, 00:17:25.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.393 "is_configured": false, 00:17:25.393 "data_offset": 0, 00:17:25.393 "data_size": 63488 00:17:25.393 }, 00:17:25.393 { 00:17:25.393 "name": "BaseBdev2", 00:17:25.393 "uuid": "474aaf25-f555-5c79-9355-78252c7c16f9", 00:17:25.393 "is_configured": true, 00:17:25.393 "data_offset": 2048, 00:17:25.393 "data_size": 63488 00:17:25.393 }, 00:17:25.393 { 00:17:25.393 "name": "BaseBdev3", 00:17:25.393 "uuid": "1d21dd34-614d-51a7-97f8-22cda36b401c", 00:17:25.393 "is_configured": true, 00:17:25.393 "data_offset": 2048, 00:17:25.393 "data_size": 63488 00:17:25.393 } 00:17:25.393 ] 00:17:25.393 }' 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.393 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.960 [2024-11-06 09:10:24.773927] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.960 [2024-11-06 09:10:24.774121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.960 [2024-11-06 09:10:24.776696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.960 [2024-11-06 09:10:24.776752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.960 [2024-11-06 09:10:24.776834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.960 [2024-11-06 09:10:24.776849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:25.960 { 00:17:25.960 "results": [ 00:17:25.960 { 00:17:25.960 "job": "raid_bdev1", 00:17:25.960 "core_mask": "0x1", 00:17:25.960 "workload": "randrw", 00:17:25.960 "percentage": 50, 00:17:25.960 "status": "finished", 00:17:25.960 "queue_depth": 1, 00:17:25.960 "io_size": 131072, 00:17:25.960 "runtime": 1.343068, 00:17:25.960 "iops": 15133.262053745604, 00:17:25.960 "mibps": 1891.6577567182005, 00:17:25.960 "io_failed": 0, 00:17:25.960 "io_timeout": 0, 00:17:25.960 "avg_latency_us": 63.444900210929816, 00:17:25.960 "min_latency_us": 23.646586345381525, 00:17:25.960 "max_latency_us": 1421.2626506024096 00:17:25.960 } 00:17:25.960 ], 00:17:25.960 "core_count": 1 00:17:25.960 } 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69007 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69007 ']' 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69007 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:17:25.960 09:10:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69007 00:17:25.960 killing process with pid 69007 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69007' 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69007 00:17:25.960 [2024-11-06 09:10:24.824924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.960 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69007 00:17:26.218 [2024-11-06 09:10:25.060343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nwvBi4qWnN 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:27.597 00:17:27.597 real 0m4.500s 00:17:27.597 user 0m5.307s 00:17:27.597 sys 0m0.577s 00:17:27.597 09:10:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:27.597 ************************************ 00:17:27.597 END TEST raid_write_error_test 00:17:27.597 ************************************ 00:17:27.597 09:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.597 09:10:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:17:27.597 09:10:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:27.597 09:10:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:27.597 09:10:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:27.597 09:10:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:27.597 09:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.597 ************************************ 00:17:27.597 START TEST raid_state_function_test 00:17:27.597 ************************************ 00:17:27.597 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:17:27.597 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:27.597 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:27.598 
09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69145 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:27.598 Process raid pid: 69145 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69145' 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69145 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69145 ']' 00:17:27.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:27.598 09:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.598 [2024-11-06 09:10:26.403015] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:27.598 [2024-11-06 09:10:26.403141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.598 [2024-11-06 09:10:26.586640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.856 [2024-11-06 09:10:26.709214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.114 [2024-11-06 09:10:26.925271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.115 [2024-11-06 09:10:26.925325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.373 [2024-11-06 09:10:27.255454] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.373 [2024-11-06 09:10:27.255513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.373 [2024-11-06 09:10:27.255525] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.373 [2024-11-06 09:10:27.255539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.373 [2024-11-06 09:10:27.255547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:28.373 [2024-11-06 09:10:27.255559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.373 [2024-11-06 09:10:27.255566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:28.373 [2024-11-06 09:10:27.255578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.373 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.374 "name": "Existed_Raid", 00:17:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.374 "strip_size_kb": 64, 00:17:28.374 "state": "configuring", 00:17:28.374 "raid_level": "raid0", 00:17:28.374 "superblock": false, 00:17:28.374 "num_base_bdevs": 4, 00:17:28.374 "num_base_bdevs_discovered": 0, 00:17:28.374 "num_base_bdevs_operational": 4, 00:17:28.374 "base_bdevs_list": [ 00:17:28.374 { 00:17:28.374 "name": "BaseBdev1", 00:17:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.374 "is_configured": false, 00:17:28.374 "data_offset": 0, 00:17:28.374 "data_size": 0 00:17:28.374 }, 00:17:28.374 { 00:17:28.374 "name": "BaseBdev2", 00:17:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.374 "is_configured": false, 00:17:28.374 "data_offset": 0, 00:17:28.374 "data_size": 0 00:17:28.374 }, 00:17:28.374 { 00:17:28.374 "name": "BaseBdev3", 00:17:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.374 "is_configured": false, 00:17:28.374 "data_offset": 0, 00:17:28.374 "data_size": 0 00:17:28.374 }, 00:17:28.374 { 00:17:28.374 "name": "BaseBdev4", 00:17:28.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.374 "is_configured": false, 00:17:28.374 "data_offset": 0, 00:17:28.374 "data_size": 0 00:17:28.374 } 00:17:28.374 ] 00:17:28.374 }' 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.374 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 [2024-11-06 09:10:27.679097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.940 [2024-11-06 09:10:27.679506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 [2024-11-06 09:10:27.687068] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.940 [2024-11-06 09:10:27.687117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.940 [2024-11-06 09:10:27.687128] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.940 [2024-11-06 09:10:27.687141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.940 [2024-11-06 09:10:27.687149] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.940 [2024-11-06 09:10:27.687160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.940 [2024-11-06 09:10:27.687168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:28.940 [2024-11-06 09:10:27.687179] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 [2024-11-06 09:10:27.733556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.940 BaseBdev1 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.941 [ 00:17:28.941 { 00:17:28.941 "name": "BaseBdev1", 00:17:28.941 "aliases": [ 00:17:28.941 "ff642fab-e1b7-4035-afc9-3be2cc34ff84" 00:17:28.941 ], 00:17:28.941 "product_name": "Malloc disk", 00:17:28.941 "block_size": 512, 00:17:28.941 "num_blocks": 65536, 00:17:28.941 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:28.941 "assigned_rate_limits": { 00:17:28.941 "rw_ios_per_sec": 0, 00:17:28.941 "rw_mbytes_per_sec": 0, 00:17:28.941 "r_mbytes_per_sec": 0, 00:17:28.941 "w_mbytes_per_sec": 0 00:17:28.941 }, 00:17:28.941 "claimed": true, 00:17:28.941 "claim_type": "exclusive_write", 00:17:28.941 "zoned": false, 00:17:28.941 "supported_io_types": { 00:17:28.941 "read": true, 00:17:28.941 "write": true, 00:17:28.941 "unmap": true, 00:17:28.941 "flush": true, 00:17:28.941 "reset": true, 00:17:28.941 "nvme_admin": false, 00:17:28.941 "nvme_io": false, 00:17:28.941 "nvme_io_md": false, 00:17:28.941 "write_zeroes": true, 00:17:28.941 "zcopy": true, 00:17:28.941 "get_zone_info": false, 00:17:28.941 "zone_management": false, 00:17:28.941 "zone_append": false, 00:17:28.941 "compare": false, 00:17:28.941 "compare_and_write": false, 00:17:28.941 "abort": true, 00:17:28.941 "seek_hole": false, 00:17:28.941 "seek_data": false, 00:17:28.941 "copy": true, 00:17:28.941 "nvme_iov_md": false 00:17:28.941 }, 00:17:28.941 "memory_domains": [ 00:17:28.941 { 00:17:28.941 "dma_device_id": "system", 00:17:28.941 "dma_device_type": 1 00:17:28.941 }, 00:17:28.941 { 00:17:28.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.941 "dma_device_type": 2 00:17:28.941 } 00:17:28.941 ], 00:17:28.941 "driver_specific": {} 00:17:28.941 } 00:17:28.941 ] 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.941 "name": "Existed_Raid", 
00:17:28.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.941 "strip_size_kb": 64, 00:17:28.941 "state": "configuring", 00:17:28.941 "raid_level": "raid0", 00:17:28.941 "superblock": false, 00:17:28.941 "num_base_bdevs": 4, 00:17:28.941 "num_base_bdevs_discovered": 1, 00:17:28.941 "num_base_bdevs_operational": 4, 00:17:28.941 "base_bdevs_list": [ 00:17:28.941 { 00:17:28.941 "name": "BaseBdev1", 00:17:28.941 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:28.941 "is_configured": true, 00:17:28.941 "data_offset": 0, 00:17:28.941 "data_size": 65536 00:17:28.941 }, 00:17:28.941 { 00:17:28.941 "name": "BaseBdev2", 00:17:28.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.941 "is_configured": false, 00:17:28.941 "data_offset": 0, 00:17:28.941 "data_size": 0 00:17:28.941 }, 00:17:28.941 { 00:17:28.941 "name": "BaseBdev3", 00:17:28.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.941 "is_configured": false, 00:17:28.941 "data_offset": 0, 00:17:28.941 "data_size": 0 00:17:28.941 }, 00:17:28.941 { 00:17:28.941 "name": "BaseBdev4", 00:17:28.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.941 "is_configured": false, 00:17:28.941 "data_offset": 0, 00:17:28.941 "data_size": 0 00:17:28.941 } 00:17:28.941 ] 00:17:28.941 }' 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.941 09:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.200 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.200 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.200 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.201 [2024-11-06 09:10:28.193416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.201 [2024-11-06 09:10:28.193473] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.201 [2024-11-06 09:10:28.205473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.201 [2024-11-06 09:10:28.207621] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.201 [2024-11-06 09:10:28.207791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.201 [2024-11-06 09:10:28.207813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:29.201 [2024-11-06 09:10:28.207829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:29.201 [2024-11-06 09:10:28.207837] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:29.201 [2024-11-06 09:10:28.207848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.201 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.459 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.459 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.459 "name": "Existed_Raid", 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "strip_size_kb": 64, 00:17:29.459 "state": "configuring", 00:17:29.459 "raid_level": "raid0", 00:17:29.459 "superblock": false, 00:17:29.459 "num_base_bdevs": 4, 00:17:29.459 
"num_base_bdevs_discovered": 1, 00:17:29.459 "num_base_bdevs_operational": 4, 00:17:29.459 "base_bdevs_list": [ 00:17:29.459 { 00:17:29.459 "name": "BaseBdev1", 00:17:29.459 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:29.459 "is_configured": true, 00:17:29.459 "data_offset": 0, 00:17:29.459 "data_size": 65536 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": "BaseBdev2", 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "is_configured": false, 00:17:29.459 "data_offset": 0, 00:17:29.459 "data_size": 0 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": "BaseBdev3", 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "is_configured": false, 00:17:29.459 "data_offset": 0, 00:17:29.459 "data_size": 0 00:17:29.459 }, 00:17:29.459 { 00:17:29.459 "name": "BaseBdev4", 00:17:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.459 "is_configured": false, 00:17:29.459 "data_offset": 0, 00:17:29.459 "data_size": 0 00:17:29.459 } 00:17:29.459 ] 00:17:29.459 }' 00:17:29.459 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.459 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.718 [2024-11-06 09:10:28.702384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.718 BaseBdev2 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:29.718 09:10:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.718 [ 00:17:29.718 { 00:17:29.718 "name": "BaseBdev2", 00:17:29.718 "aliases": [ 00:17:29.718 "01046a41-acaa-4699-bb72-a70fb2d927b0" 00:17:29.718 ], 00:17:29.718 "product_name": "Malloc disk", 00:17:29.718 "block_size": 512, 00:17:29.718 "num_blocks": 65536, 00:17:29.718 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:29.718 "assigned_rate_limits": { 00:17:29.718 "rw_ios_per_sec": 0, 00:17:29.718 "rw_mbytes_per_sec": 0, 00:17:29.718 "r_mbytes_per_sec": 0, 00:17:29.718 "w_mbytes_per_sec": 0 00:17:29.718 }, 00:17:29.718 "claimed": true, 00:17:29.718 "claim_type": "exclusive_write", 00:17:29.718 "zoned": false, 00:17:29.718 "supported_io_types": { 
00:17:29.718 "read": true, 00:17:29.718 "write": true, 00:17:29.718 "unmap": true, 00:17:29.718 "flush": true, 00:17:29.718 "reset": true, 00:17:29.718 "nvme_admin": false, 00:17:29.718 "nvme_io": false, 00:17:29.718 "nvme_io_md": false, 00:17:29.718 "write_zeroes": true, 00:17:29.718 "zcopy": true, 00:17:29.718 "get_zone_info": false, 00:17:29.718 "zone_management": false, 00:17:29.718 "zone_append": false, 00:17:29.718 "compare": false, 00:17:29.718 "compare_and_write": false, 00:17:29.718 "abort": true, 00:17:29.718 "seek_hole": false, 00:17:29.718 "seek_data": false, 00:17:29.718 "copy": true, 00:17:29.718 "nvme_iov_md": false 00:17:29.718 }, 00:17:29.718 "memory_domains": [ 00:17:29.718 { 00:17:29.718 "dma_device_id": "system", 00:17:29.718 "dma_device_type": 1 00:17:29.718 }, 00:17:29.718 { 00:17:29.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.718 "dma_device_type": 2 00:17:29.718 } 00:17:29.718 ], 00:17:29.718 "driver_specific": {} 00:17:29.718 } 00:17:29.718 ] 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.718 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.977 "name": "Existed_Raid", 00:17:29.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.977 "strip_size_kb": 64, 00:17:29.977 "state": "configuring", 00:17:29.977 "raid_level": "raid0", 00:17:29.977 "superblock": false, 00:17:29.977 "num_base_bdevs": 4, 00:17:29.977 "num_base_bdevs_discovered": 2, 00:17:29.977 "num_base_bdevs_operational": 4, 00:17:29.977 "base_bdevs_list": [ 00:17:29.977 { 00:17:29.977 "name": "BaseBdev1", 00:17:29.977 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:29.977 "is_configured": true, 00:17:29.977 "data_offset": 0, 00:17:29.977 "data_size": 65536 00:17:29.977 }, 00:17:29.977 { 00:17:29.977 "name": "BaseBdev2", 00:17:29.977 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:29.977 
"is_configured": true, 00:17:29.977 "data_offset": 0, 00:17:29.977 "data_size": 65536 00:17:29.977 }, 00:17:29.977 { 00:17:29.977 "name": "BaseBdev3", 00:17:29.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.977 "is_configured": false, 00:17:29.977 "data_offset": 0, 00:17:29.977 "data_size": 0 00:17:29.977 }, 00:17:29.977 { 00:17:29.977 "name": "BaseBdev4", 00:17:29.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.977 "is_configured": false, 00:17:29.977 "data_offset": 0, 00:17:29.977 "data_size": 0 00:17:29.977 } 00:17:29.977 ] 00:17:29.977 }' 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.977 09:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.236 [2024-11-06 09:10:29.216420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.236 BaseBdev3 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.236 [ 00:17:30.236 { 00:17:30.236 "name": "BaseBdev3", 00:17:30.236 "aliases": [ 00:17:30.236 "765ca97e-5ac9-4240-9dea-58003eba43c1" 00:17:30.236 ], 00:17:30.236 "product_name": "Malloc disk", 00:17:30.236 "block_size": 512, 00:17:30.236 "num_blocks": 65536, 00:17:30.236 "uuid": "765ca97e-5ac9-4240-9dea-58003eba43c1", 00:17:30.236 "assigned_rate_limits": { 00:17:30.236 "rw_ios_per_sec": 0, 00:17:30.236 "rw_mbytes_per_sec": 0, 00:17:30.236 "r_mbytes_per_sec": 0, 00:17:30.236 "w_mbytes_per_sec": 0 00:17:30.236 }, 00:17:30.236 "claimed": true, 00:17:30.236 "claim_type": "exclusive_write", 00:17:30.236 "zoned": false, 00:17:30.236 "supported_io_types": { 00:17:30.236 "read": true, 00:17:30.236 "write": true, 00:17:30.236 "unmap": true, 00:17:30.236 "flush": true, 00:17:30.236 "reset": true, 00:17:30.236 "nvme_admin": false, 00:17:30.236 "nvme_io": false, 00:17:30.236 "nvme_io_md": false, 00:17:30.236 "write_zeroes": true, 00:17:30.236 "zcopy": true, 00:17:30.236 "get_zone_info": false, 00:17:30.236 "zone_management": false, 00:17:30.236 "zone_append": false, 00:17:30.236 "compare": false, 00:17:30.236 "compare_and_write": false, 
00:17:30.236 "abort": true, 00:17:30.236 "seek_hole": false, 00:17:30.236 "seek_data": false, 00:17:30.236 "copy": true, 00:17:30.236 "nvme_iov_md": false 00:17:30.236 }, 00:17:30.236 "memory_domains": [ 00:17:30.236 { 00:17:30.236 "dma_device_id": "system", 00:17:30.236 "dma_device_type": 1 00:17:30.236 }, 00:17:30.236 { 00:17:30.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.236 "dma_device_type": 2 00:17:30.236 } 00:17:30.236 ], 00:17:30.236 "driver_specific": {} 00:17:30.236 } 00:17:30.236 ] 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.236 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.494 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.494 "name": "Existed_Raid", 00:17:30.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.494 "strip_size_kb": 64, 00:17:30.494 "state": "configuring", 00:17:30.494 "raid_level": "raid0", 00:17:30.494 "superblock": false, 00:17:30.494 "num_base_bdevs": 4, 00:17:30.494 "num_base_bdevs_discovered": 3, 00:17:30.494 "num_base_bdevs_operational": 4, 00:17:30.494 "base_bdevs_list": [ 00:17:30.494 { 00:17:30.494 "name": "BaseBdev1", 00:17:30.494 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:30.494 "is_configured": true, 00:17:30.494 "data_offset": 0, 00:17:30.494 "data_size": 65536 00:17:30.494 }, 00:17:30.494 { 00:17:30.494 "name": "BaseBdev2", 00:17:30.494 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:30.494 "is_configured": true, 00:17:30.494 "data_offset": 0, 00:17:30.494 "data_size": 65536 00:17:30.494 }, 00:17:30.494 { 00:17:30.494 "name": "BaseBdev3", 00:17:30.494 "uuid": "765ca97e-5ac9-4240-9dea-58003eba43c1", 00:17:30.494 "is_configured": true, 00:17:30.494 "data_offset": 0, 00:17:30.494 "data_size": 65536 00:17:30.495 }, 00:17:30.495 { 00:17:30.495 "name": "BaseBdev4", 00:17:30.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.495 "is_configured": false, 
00:17:30.495 "data_offset": 0, 00:17:30.495 "data_size": 0 00:17:30.495 } 00:17:30.495 ] 00:17:30.495 }' 00:17:30.495 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.495 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.753 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:30.753 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.753 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.753 [2024-11-06 09:10:29.721036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:30.753 [2024-11-06 09:10:29.721091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:30.753 [2024-11-06 09:10:29.721103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:30.753 [2024-11-06 09:10:29.721416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:30.753 [2024-11-06 09:10:29.721590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:30.754 [2024-11-06 09:10:29.721605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:30.754 [2024-11-06 09:10:29.721918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.754 BaseBdev4 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.754 [ 00:17:30.754 { 00:17:30.754 "name": "BaseBdev4", 00:17:30.754 "aliases": [ 00:17:30.754 "59247310-18c2-40d8-bdca-c2bbfa2c7dbe" 00:17:30.754 ], 00:17:30.754 "product_name": "Malloc disk", 00:17:30.754 "block_size": 512, 00:17:30.754 "num_blocks": 65536, 00:17:30.754 "uuid": "59247310-18c2-40d8-bdca-c2bbfa2c7dbe", 00:17:30.754 "assigned_rate_limits": { 00:17:30.754 "rw_ios_per_sec": 0, 00:17:30.754 "rw_mbytes_per_sec": 0, 00:17:30.754 "r_mbytes_per_sec": 0, 00:17:30.754 "w_mbytes_per_sec": 0 00:17:30.754 }, 00:17:30.754 "claimed": true, 00:17:30.754 "claim_type": "exclusive_write", 00:17:30.754 "zoned": false, 00:17:30.754 "supported_io_types": { 00:17:30.754 "read": true, 00:17:30.754 "write": true, 00:17:30.754 "unmap": true, 00:17:30.754 "flush": true, 00:17:30.754 "reset": true, 00:17:30.754 
"nvme_admin": false, 00:17:30.754 "nvme_io": false, 00:17:30.754 "nvme_io_md": false, 00:17:30.754 "write_zeroes": true, 00:17:30.754 "zcopy": true, 00:17:30.754 "get_zone_info": false, 00:17:30.754 "zone_management": false, 00:17:30.754 "zone_append": false, 00:17:30.754 "compare": false, 00:17:30.754 "compare_and_write": false, 00:17:30.754 "abort": true, 00:17:30.754 "seek_hole": false, 00:17:30.754 "seek_data": false, 00:17:30.754 "copy": true, 00:17:30.754 "nvme_iov_md": false 00:17:30.754 }, 00:17:30.754 "memory_domains": [ 00:17:30.754 { 00:17:30.754 "dma_device_id": "system", 00:17:30.754 "dma_device_type": 1 00:17:30.754 }, 00:17:30.754 { 00:17:30.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.754 "dma_device_type": 2 00:17:30.754 } 00:17:30.754 ], 00:17:30.754 "driver_specific": {} 00:17:30.754 } 00:17:30.754 ] 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.754 09:10:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.754 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.013 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.013 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.013 "name": "Existed_Raid", 00:17:31.013 "uuid": "aeed10f7-9d32-4638-969e-b3b0c8d63e88", 00:17:31.013 "strip_size_kb": 64, 00:17:31.013 "state": "online", 00:17:31.013 "raid_level": "raid0", 00:17:31.013 "superblock": false, 00:17:31.013 "num_base_bdevs": 4, 00:17:31.013 "num_base_bdevs_discovered": 4, 00:17:31.013 "num_base_bdevs_operational": 4, 00:17:31.013 "base_bdevs_list": [ 00:17:31.013 { 00:17:31.013 "name": "BaseBdev1", 00:17:31.013 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:31.013 "is_configured": true, 00:17:31.013 "data_offset": 0, 00:17:31.013 "data_size": 65536 00:17:31.013 }, 00:17:31.013 { 00:17:31.013 "name": "BaseBdev2", 00:17:31.013 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:31.013 "is_configured": true, 00:17:31.013 "data_offset": 0, 00:17:31.013 "data_size": 65536 00:17:31.013 }, 00:17:31.013 { 00:17:31.013 "name": "BaseBdev3", 00:17:31.013 "uuid": 
"765ca97e-5ac9-4240-9dea-58003eba43c1", 00:17:31.013 "is_configured": true, 00:17:31.013 "data_offset": 0, 00:17:31.013 "data_size": 65536 00:17:31.013 }, 00:17:31.013 { 00:17:31.013 "name": "BaseBdev4", 00:17:31.013 "uuid": "59247310-18c2-40d8-bdca-c2bbfa2c7dbe", 00:17:31.013 "is_configured": true, 00:17:31.013 "data_offset": 0, 00:17:31.013 "data_size": 65536 00:17:31.013 } 00:17:31.013 ] 00:17:31.013 }' 00:17:31.013 09:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.013 09:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.272 [2024-11-06 09:10:30.156809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.272 09:10:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:31.272 "name": "Existed_Raid", 00:17:31.272 "aliases": [ 00:17:31.272 "aeed10f7-9d32-4638-969e-b3b0c8d63e88" 00:17:31.272 ], 00:17:31.272 "product_name": "Raid Volume", 00:17:31.272 "block_size": 512, 00:17:31.272 "num_blocks": 262144, 00:17:31.272 "uuid": "aeed10f7-9d32-4638-969e-b3b0c8d63e88", 00:17:31.272 "assigned_rate_limits": { 00:17:31.272 "rw_ios_per_sec": 0, 00:17:31.272 "rw_mbytes_per_sec": 0, 00:17:31.272 "r_mbytes_per_sec": 0, 00:17:31.272 "w_mbytes_per_sec": 0 00:17:31.272 }, 00:17:31.272 "claimed": false, 00:17:31.272 "zoned": false, 00:17:31.272 "supported_io_types": { 00:17:31.272 "read": true, 00:17:31.272 "write": true, 00:17:31.272 "unmap": true, 00:17:31.272 "flush": true, 00:17:31.272 "reset": true, 00:17:31.272 "nvme_admin": false, 00:17:31.272 "nvme_io": false, 00:17:31.272 "nvme_io_md": false, 00:17:31.272 "write_zeroes": true, 00:17:31.272 "zcopy": false, 00:17:31.272 "get_zone_info": false, 00:17:31.272 "zone_management": false, 00:17:31.272 "zone_append": false, 00:17:31.272 "compare": false, 00:17:31.272 "compare_and_write": false, 00:17:31.272 "abort": false, 00:17:31.272 "seek_hole": false, 00:17:31.272 "seek_data": false, 00:17:31.272 "copy": false, 00:17:31.272 "nvme_iov_md": false 00:17:31.272 }, 00:17:31.272 "memory_domains": [ 00:17:31.272 { 00:17:31.272 "dma_device_id": "system", 00:17:31.272 "dma_device_type": 1 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.272 "dma_device_type": 2 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "system", 00:17:31.272 "dma_device_type": 1 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.272 "dma_device_type": 2 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "system", 00:17:31.272 "dma_device_type": 1 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:31.272 "dma_device_type": 2 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "system", 00:17:31.272 "dma_device_type": 1 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.272 "dma_device_type": 2 00:17:31.272 } 00:17:31.272 ], 00:17:31.272 "driver_specific": { 00:17:31.272 "raid": { 00:17:31.272 "uuid": "aeed10f7-9d32-4638-969e-b3b0c8d63e88", 00:17:31.272 "strip_size_kb": 64, 00:17:31.272 "state": "online", 00:17:31.272 "raid_level": "raid0", 00:17:31.272 "superblock": false, 00:17:31.272 "num_base_bdevs": 4, 00:17:31.272 "num_base_bdevs_discovered": 4, 00:17:31.272 "num_base_bdevs_operational": 4, 00:17:31.272 "base_bdevs_list": [ 00:17:31.272 { 00:17:31.272 "name": "BaseBdev1", 00:17:31.272 "uuid": "ff642fab-e1b7-4035-afc9-3be2cc34ff84", 00:17:31.272 "is_configured": true, 00:17:31.272 "data_offset": 0, 00:17:31.272 "data_size": 65536 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "name": "BaseBdev2", 00:17:31.272 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:31.272 "is_configured": true, 00:17:31.272 "data_offset": 0, 00:17:31.272 "data_size": 65536 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "name": "BaseBdev3", 00:17:31.272 "uuid": "765ca97e-5ac9-4240-9dea-58003eba43c1", 00:17:31.272 "is_configured": true, 00:17:31.272 "data_offset": 0, 00:17:31.272 "data_size": 65536 00:17:31.272 }, 00:17:31.272 { 00:17:31.272 "name": "BaseBdev4", 00:17:31.272 "uuid": "59247310-18c2-40d8-bdca-c2bbfa2c7dbe", 00:17:31.272 "is_configured": true, 00:17:31.272 "data_offset": 0, 00:17:31.272 "data_size": 65536 00:17:31.272 } 00:17:31.272 ] 00:17:31.272 } 00:17:31.272 } 00:17:31.272 }' 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:31.272 BaseBdev2 00:17:31.272 BaseBdev3 
00:17:31.272 BaseBdev4' 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.272 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:31.273 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.273 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:31.273 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.273 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.273 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.532 09:10:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.532 09:10:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.532 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.532 [2024-11-06 09:10:30.484433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.532 [2024-11-06 09:10:30.484465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.532 [2024-11-06 09:10:30.484518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.790 "name": "Existed_Raid", 00:17:31.790 "uuid": "aeed10f7-9d32-4638-969e-b3b0c8d63e88", 00:17:31.790 "strip_size_kb": 64, 00:17:31.790 "state": "offline", 00:17:31.790 "raid_level": "raid0", 00:17:31.790 "superblock": false, 00:17:31.790 "num_base_bdevs": 4, 00:17:31.790 "num_base_bdevs_discovered": 3, 00:17:31.790 "num_base_bdevs_operational": 3, 00:17:31.790 "base_bdevs_list": [ 00:17:31.790 { 00:17:31.790 "name": null, 00:17:31.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.790 "is_configured": false, 00:17:31.790 "data_offset": 0, 00:17:31.790 "data_size": 65536 00:17:31.790 }, 00:17:31.790 { 00:17:31.790 "name": "BaseBdev2", 00:17:31.790 "uuid": "01046a41-acaa-4699-bb72-a70fb2d927b0", 00:17:31.790 "is_configured": 
true, 00:17:31.790 "data_offset": 0, 00:17:31.790 "data_size": 65536 00:17:31.790 }, 00:17:31.790 { 00:17:31.790 "name": "BaseBdev3", 00:17:31.790 "uuid": "765ca97e-5ac9-4240-9dea-58003eba43c1", 00:17:31.790 "is_configured": true, 00:17:31.790 "data_offset": 0, 00:17:31.790 "data_size": 65536 00:17:31.790 }, 00:17:31.790 { 00:17:31.790 "name": "BaseBdev4", 00:17:31.790 "uuid": "59247310-18c2-40d8-bdca-c2bbfa2c7dbe", 00:17:31.790 "is_configured": true, 00:17:31.790 "data_offset": 0, 00:17:31.790 "data_size": 65536 00:17:31.790 } 00:17:31.790 ] 00:17:31.790 }' 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.790 09:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.049 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:32.049 09:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:32.049 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.049 [2024-11-06 09:10:31.055451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 [2024-11-06 09:10:31.208598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:32.310 09:10:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:32.310 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.570 [2024-11-06 09:10:31.366220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:32.570 [2024-11-06 09:10:31.366395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.570 BaseBdev2 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.570 [ 00:17:32.570 { 00:17:32.570 "name": "BaseBdev2", 00:17:32.570 "aliases": [ 00:17:32.570 "932318e9-319a-47d9-8739-826d87db86c7" 00:17:32.570 ], 00:17:32.570 "product_name": "Malloc disk", 00:17:32.570 "block_size": 512, 00:17:32.570 "num_blocks": 65536, 00:17:32.570 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:32.570 "assigned_rate_limits": { 00:17:32.570 "rw_ios_per_sec": 0, 00:17:32.570 "rw_mbytes_per_sec": 0, 00:17:32.570 "r_mbytes_per_sec": 0, 00:17:32.570 "w_mbytes_per_sec": 0 00:17:32.570 }, 00:17:32.570 "claimed": false, 00:17:32.570 "zoned": false, 00:17:32.570 "supported_io_types": { 00:17:32.570 "read": true, 00:17:32.570 "write": true, 00:17:32.570 "unmap": true, 00:17:32.570 "flush": true, 00:17:32.570 "reset": true, 00:17:32.570 "nvme_admin": false, 00:17:32.570 "nvme_io": false, 00:17:32.570 "nvme_io_md": false, 00:17:32.570 "write_zeroes": true, 00:17:32.570 "zcopy": true, 00:17:32.570 "get_zone_info": false, 00:17:32.570 "zone_management": false, 00:17:32.570 "zone_append": false, 00:17:32.570 "compare": false, 00:17:32.570 "compare_and_write": false, 00:17:32.570 "abort": true, 00:17:32.570 "seek_hole": false, 00:17:32.570 
"seek_data": false, 00:17:32.570 "copy": true, 00:17:32.570 "nvme_iov_md": false 00:17:32.570 }, 00:17:32.570 "memory_domains": [ 00:17:32.570 { 00:17:32.570 "dma_device_id": "system", 00:17:32.570 "dma_device_type": 1 00:17:32.570 }, 00:17:32.570 { 00:17:32.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.570 "dma_device_type": 2 00:17:32.570 } 00:17:32.570 ], 00:17:32.570 "driver_specific": {} 00:17:32.570 } 00:17:32.570 ] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.570 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.830 BaseBdev3 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:32.830 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 [ 00:17:32.831 { 00:17:32.831 "name": "BaseBdev3", 00:17:32.831 "aliases": [ 00:17:32.831 "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2" 00:17:32.831 ], 00:17:32.831 "product_name": "Malloc disk", 00:17:32.831 "block_size": 512, 00:17:32.831 "num_blocks": 65536, 00:17:32.831 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:32.831 "assigned_rate_limits": { 00:17:32.831 "rw_ios_per_sec": 0, 00:17:32.831 "rw_mbytes_per_sec": 0, 00:17:32.831 "r_mbytes_per_sec": 0, 00:17:32.831 "w_mbytes_per_sec": 0 00:17:32.831 }, 00:17:32.831 "claimed": false, 00:17:32.831 "zoned": false, 00:17:32.831 "supported_io_types": { 00:17:32.831 "read": true, 00:17:32.831 "write": true, 00:17:32.831 "unmap": true, 00:17:32.831 "flush": true, 00:17:32.831 "reset": true, 00:17:32.831 "nvme_admin": false, 00:17:32.831 "nvme_io": false, 00:17:32.831 "nvme_io_md": false, 00:17:32.831 "write_zeroes": true, 00:17:32.831 "zcopy": true, 00:17:32.831 "get_zone_info": false, 00:17:32.831 "zone_management": false, 00:17:32.831 "zone_append": false, 00:17:32.831 "compare": false, 00:17:32.831 "compare_and_write": false, 00:17:32.831 "abort": true, 00:17:32.831 "seek_hole": false, 00:17:32.831 "seek_data": false, 
00:17:32.831 "copy": true, 00:17:32.831 "nvme_iov_md": false 00:17:32.831 }, 00:17:32.831 "memory_domains": [ 00:17:32.831 { 00:17:32.831 "dma_device_id": "system", 00:17:32.831 "dma_device_type": 1 00:17:32.831 }, 00:17:32.831 { 00:17:32.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.831 "dma_device_type": 2 00:17:32.831 } 00:17:32.831 ], 00:17:32.831 "driver_specific": {} 00:17:32.831 } 00:17:32.831 ] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 BaseBdev4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:32.831 
09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 [ 00:17:32.831 { 00:17:32.831 "name": "BaseBdev4", 00:17:32.831 "aliases": [ 00:17:32.831 "36909915-fedf-4adc-ba25-f477544c0f8b" 00:17:32.831 ], 00:17:32.831 "product_name": "Malloc disk", 00:17:32.831 "block_size": 512, 00:17:32.831 "num_blocks": 65536, 00:17:32.831 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:32.831 "assigned_rate_limits": { 00:17:32.831 "rw_ios_per_sec": 0, 00:17:32.831 "rw_mbytes_per_sec": 0, 00:17:32.831 "r_mbytes_per_sec": 0, 00:17:32.831 "w_mbytes_per_sec": 0 00:17:32.831 }, 00:17:32.831 "claimed": false, 00:17:32.831 "zoned": false, 00:17:32.831 "supported_io_types": { 00:17:32.831 "read": true, 00:17:32.831 "write": true, 00:17:32.831 "unmap": true, 00:17:32.831 "flush": true, 00:17:32.831 "reset": true, 00:17:32.831 "nvme_admin": false, 00:17:32.831 "nvme_io": false, 00:17:32.831 "nvme_io_md": false, 00:17:32.831 "write_zeroes": true, 00:17:32.831 "zcopy": true, 00:17:32.831 "get_zone_info": false, 00:17:32.831 "zone_management": false, 00:17:32.831 "zone_append": false, 00:17:32.831 "compare": false, 00:17:32.831 "compare_and_write": false, 00:17:32.831 "abort": true, 00:17:32.831 "seek_hole": false, 00:17:32.831 "seek_data": false, 00:17:32.831 
"copy": true, 00:17:32.831 "nvme_iov_md": false 00:17:32.831 }, 00:17:32.831 "memory_domains": [ 00:17:32.831 { 00:17:32.831 "dma_device_id": "system", 00:17:32.831 "dma_device_type": 1 00:17:32.831 }, 00:17:32.831 { 00:17:32.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.831 "dma_device_type": 2 00:17:32.831 } 00:17:32.831 ], 00:17:32.831 "driver_specific": {} 00:17:32.831 } 00:17:32.831 ] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 [2024-11-06 09:10:31.787363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.831 [2024-11-06 09:10:31.787533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.831 [2024-11-06 09:10:31.787633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.831 [2024-11-06 09:10:31.790050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.831 [2024-11-06 09:10:31.790228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.831 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.831 "name": "Existed_Raid", 00:17:32.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.831 "strip_size_kb": 64, 00:17:32.831 "state": "configuring", 00:17:32.831 
"raid_level": "raid0", 00:17:32.831 "superblock": false, 00:17:32.831 "num_base_bdevs": 4, 00:17:32.831 "num_base_bdevs_discovered": 3, 00:17:32.831 "num_base_bdevs_operational": 4, 00:17:32.831 "base_bdevs_list": [ 00:17:32.831 { 00:17:32.831 "name": "BaseBdev1", 00:17:32.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.831 "is_configured": false, 00:17:32.831 "data_offset": 0, 00:17:32.831 "data_size": 0 00:17:32.831 }, 00:17:32.831 { 00:17:32.831 "name": "BaseBdev2", 00:17:32.831 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:32.831 "is_configured": true, 00:17:32.831 "data_offset": 0, 00:17:32.831 "data_size": 65536 00:17:32.831 }, 00:17:32.831 { 00:17:32.831 "name": "BaseBdev3", 00:17:32.832 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:32.832 "is_configured": true, 00:17:32.832 "data_offset": 0, 00:17:32.832 "data_size": 65536 00:17:32.832 }, 00:17:32.832 { 00:17:32.832 "name": "BaseBdev4", 00:17:32.832 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:32.832 "is_configured": true, 00:17:32.832 "data_offset": 0, 00:17:32.832 "data_size": 65536 00:17:32.832 } 00:17:32.832 ] 00:17:32.832 }' 00:17:32.832 09:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.832 09:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.399 [2024-11-06 09:10:32.206783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:33.399 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.400 "name": "Existed_Raid", 00:17:33.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.400 "strip_size_kb": 64, 00:17:33.400 "state": "configuring", 00:17:33.400 "raid_level": "raid0", 00:17:33.400 "superblock": false, 00:17:33.400 
"num_base_bdevs": 4, 00:17:33.400 "num_base_bdevs_discovered": 2, 00:17:33.400 "num_base_bdevs_operational": 4, 00:17:33.400 "base_bdevs_list": [ 00:17:33.400 { 00:17:33.400 "name": "BaseBdev1", 00:17:33.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.400 "is_configured": false, 00:17:33.400 "data_offset": 0, 00:17:33.400 "data_size": 0 00:17:33.400 }, 00:17:33.400 { 00:17:33.400 "name": null, 00:17:33.400 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:33.400 "is_configured": false, 00:17:33.400 "data_offset": 0, 00:17:33.400 "data_size": 65536 00:17:33.400 }, 00:17:33.400 { 00:17:33.400 "name": "BaseBdev3", 00:17:33.400 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:33.400 "is_configured": true, 00:17:33.400 "data_offset": 0, 00:17:33.400 "data_size": 65536 00:17:33.400 }, 00:17:33.400 { 00:17:33.400 "name": "BaseBdev4", 00:17:33.400 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:33.400 "is_configured": true, 00:17:33.400 "data_offset": 0, 00:17:33.400 "data_size": 65536 00:17:33.400 } 00:17:33.400 ] 00:17:33.400 }' 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.400 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.658 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.658 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.658 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:33.658 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.658 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:33.917 09:10:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.917 [2024-11-06 09:10:32.736468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.917 BaseBdev1 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.917 09:10:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.917 [ 00:17:33.917 { 00:17:33.917 "name": "BaseBdev1", 00:17:33.917 "aliases": [ 00:17:33.917 "46b8f2bf-df04-4ec3-bd34-53462f48bbae" 00:17:33.917 ], 00:17:33.917 "product_name": "Malloc disk", 00:17:33.917 "block_size": 512, 00:17:33.917 "num_blocks": 65536, 00:17:33.917 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:33.917 "assigned_rate_limits": { 00:17:33.917 "rw_ios_per_sec": 0, 00:17:33.917 "rw_mbytes_per_sec": 0, 00:17:33.917 "r_mbytes_per_sec": 0, 00:17:33.917 "w_mbytes_per_sec": 0 00:17:33.917 }, 00:17:33.917 "claimed": true, 00:17:33.917 "claim_type": "exclusive_write", 00:17:33.917 "zoned": false, 00:17:33.917 "supported_io_types": { 00:17:33.917 "read": true, 00:17:33.917 "write": true, 00:17:33.917 "unmap": true, 00:17:33.917 "flush": true, 00:17:33.917 "reset": true, 00:17:33.917 "nvme_admin": false, 00:17:33.917 "nvme_io": false, 00:17:33.917 "nvme_io_md": false, 00:17:33.917 "write_zeroes": true, 00:17:33.917 "zcopy": true, 00:17:33.917 "get_zone_info": false, 00:17:33.917 "zone_management": false, 00:17:33.917 "zone_append": false, 00:17:33.917 "compare": false, 00:17:33.917 "compare_and_write": false, 00:17:33.917 "abort": true, 00:17:33.917 "seek_hole": false, 00:17:33.917 "seek_data": false, 00:17:33.917 "copy": true, 00:17:33.917 "nvme_iov_md": false 00:17:33.917 }, 00:17:33.917 "memory_domains": [ 00:17:33.917 { 00:17:33.918 "dma_device_id": "system", 00:17:33.918 "dma_device_type": 1 00:17:33.918 }, 00:17:33.918 { 00:17:33.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.918 "dma_device_type": 2 00:17:33.918 } 00:17:33.918 ], 00:17:33.918 "driver_specific": {} 00:17:33.918 } 00:17:33.918 ] 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.918 "name": "Existed_Raid", 00:17:33.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.918 "strip_size_kb": 64, 00:17:33.918 "state": "configuring", 00:17:33.918 "raid_level": "raid0", 00:17:33.918 "superblock": false, 
00:17:33.918 "num_base_bdevs": 4, 00:17:33.918 "num_base_bdevs_discovered": 3, 00:17:33.918 "num_base_bdevs_operational": 4, 00:17:33.918 "base_bdevs_list": [ 00:17:33.918 { 00:17:33.918 "name": "BaseBdev1", 00:17:33.918 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:33.918 "is_configured": true, 00:17:33.918 "data_offset": 0, 00:17:33.918 "data_size": 65536 00:17:33.918 }, 00:17:33.918 { 00:17:33.918 "name": null, 00:17:33.918 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:33.918 "is_configured": false, 00:17:33.918 "data_offset": 0, 00:17:33.918 "data_size": 65536 00:17:33.918 }, 00:17:33.918 { 00:17:33.918 "name": "BaseBdev3", 00:17:33.918 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:33.918 "is_configured": true, 00:17:33.918 "data_offset": 0, 00:17:33.918 "data_size": 65536 00:17:33.918 }, 00:17:33.918 { 00:17:33.918 "name": "BaseBdev4", 00:17:33.918 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:33.918 "is_configured": true, 00:17:33.918 "data_offset": 0, 00:17:33.918 "data_size": 65536 00:17:33.918 } 00:17:33.918 ] 00:17:33.918 }' 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.918 09:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.176 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:34.176 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.176 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.176 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:34.435 09:10:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.435 [2024-11-06 09:10:33.259995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.435 09:10:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.435 "name": "Existed_Raid", 00:17:34.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.435 "strip_size_kb": 64, 00:17:34.435 "state": "configuring", 00:17:34.435 "raid_level": "raid0", 00:17:34.435 "superblock": false, 00:17:34.435 "num_base_bdevs": 4, 00:17:34.435 "num_base_bdevs_discovered": 2, 00:17:34.435 "num_base_bdevs_operational": 4, 00:17:34.435 "base_bdevs_list": [ 00:17:34.435 { 00:17:34.435 "name": "BaseBdev1", 00:17:34.435 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:34.435 "is_configured": true, 00:17:34.435 "data_offset": 0, 00:17:34.435 "data_size": 65536 00:17:34.435 }, 00:17:34.435 { 00:17:34.435 "name": null, 00:17:34.435 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:34.435 "is_configured": false, 00:17:34.435 "data_offset": 0, 00:17:34.435 "data_size": 65536 00:17:34.435 }, 00:17:34.435 { 00:17:34.435 "name": null, 00:17:34.435 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:34.435 "is_configured": false, 00:17:34.435 "data_offset": 0, 00:17:34.435 "data_size": 65536 00:17:34.435 }, 00:17:34.435 { 00:17:34.435 "name": "BaseBdev4", 00:17:34.435 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:34.435 "is_configured": true, 00:17:34.435 "data_offset": 0, 00:17:34.435 "data_size": 65536 00:17:34.435 } 00:17:34.435 ] 00:17:34.435 }' 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.435 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 [2024-11-06 09:10:33.715384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:34.693 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.694 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.953 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.953 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.953 "name": "Existed_Raid", 00:17:34.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.953 "strip_size_kb": 64, 00:17:34.953 "state": "configuring", 00:17:34.953 "raid_level": "raid0", 00:17:34.953 "superblock": false, 00:17:34.953 "num_base_bdevs": 4, 00:17:34.953 "num_base_bdevs_discovered": 3, 00:17:34.953 "num_base_bdevs_operational": 4, 00:17:34.953 "base_bdevs_list": [ 00:17:34.953 { 00:17:34.953 "name": "BaseBdev1", 00:17:34.953 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:34.953 "is_configured": true, 00:17:34.953 "data_offset": 0, 00:17:34.953 "data_size": 65536 00:17:34.953 }, 00:17:34.953 { 00:17:34.953 "name": null, 00:17:34.953 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:34.953 "is_configured": false, 00:17:34.953 "data_offset": 0, 00:17:34.953 "data_size": 65536 00:17:34.953 }, 00:17:34.953 { 00:17:34.953 "name": "BaseBdev3", 00:17:34.953 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 
00:17:34.953 "is_configured": true, 00:17:34.953 "data_offset": 0, 00:17:34.953 "data_size": 65536 00:17:34.953 }, 00:17:34.953 { 00:17:34.953 "name": "BaseBdev4", 00:17:34.953 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:34.953 "is_configured": true, 00:17:34.953 "data_offset": 0, 00:17:34.953 "data_size": 65536 00:17:34.953 } 00:17:34.953 ] 00:17:34.953 }' 00:17:34.953 09:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.953 09:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.212 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.212 [2024-11-06 09:10:34.174771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:35.470 09:10:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.470 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.470 "name": "Existed_Raid", 00:17:35.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.470 "strip_size_kb": 64, 00:17:35.470 "state": "configuring", 00:17:35.470 "raid_level": "raid0", 00:17:35.470 "superblock": false, 00:17:35.470 "num_base_bdevs": 4, 00:17:35.470 "num_base_bdevs_discovered": 2, 00:17:35.470 
"num_base_bdevs_operational": 4, 00:17:35.470 "base_bdevs_list": [ 00:17:35.470 { 00:17:35.470 "name": null, 00:17:35.470 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:35.470 "is_configured": false, 00:17:35.470 "data_offset": 0, 00:17:35.470 "data_size": 65536 00:17:35.470 }, 00:17:35.470 { 00:17:35.470 "name": null, 00:17:35.470 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:35.470 "is_configured": false, 00:17:35.470 "data_offset": 0, 00:17:35.470 "data_size": 65536 00:17:35.470 }, 00:17:35.470 { 00:17:35.470 "name": "BaseBdev3", 00:17:35.470 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:35.470 "is_configured": true, 00:17:35.470 "data_offset": 0, 00:17:35.470 "data_size": 65536 00:17:35.471 }, 00:17:35.471 { 00:17:35.471 "name": "BaseBdev4", 00:17:35.471 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:35.471 "is_configured": true, 00:17:35.471 "data_offset": 0, 00:17:35.471 "data_size": 65536 00:17:35.471 } 00:17:35.471 ] 00:17:35.471 }' 00:17:35.471 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.471 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.730 [2024-11-06 09:10:34.725928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.730 
09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.730 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.989 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.989 "name": "Existed_Raid", 00:17:35.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.989 "strip_size_kb": 64, 00:17:35.989 "state": "configuring", 00:17:35.989 "raid_level": "raid0", 00:17:35.989 "superblock": false, 00:17:35.989 "num_base_bdevs": 4, 00:17:35.989 "num_base_bdevs_discovered": 3, 00:17:35.989 "num_base_bdevs_operational": 4, 00:17:35.989 "base_bdevs_list": [ 00:17:35.989 { 00:17:35.989 "name": null, 00:17:35.989 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:35.989 "is_configured": false, 00:17:35.989 "data_offset": 0, 00:17:35.989 "data_size": 65536 00:17:35.989 }, 00:17:35.989 { 00:17:35.989 "name": "BaseBdev2", 00:17:35.989 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:35.989 "is_configured": true, 00:17:35.989 "data_offset": 0, 00:17:35.989 "data_size": 65536 00:17:35.989 }, 00:17:35.989 { 00:17:35.989 "name": "BaseBdev3", 00:17:35.989 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:35.989 "is_configured": true, 00:17:35.989 "data_offset": 0, 00:17:35.989 "data_size": 65536 00:17:35.989 }, 00:17:35.990 { 00:17:35.990 "name": "BaseBdev4", 00:17:35.990 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:35.990 "is_configured": true, 00:17:35.990 "data_offset": 0, 00:17:35.990 "data_size": 65536 00:17:35.990 } 00:17:35.990 ] 00:17:35.990 }' 00:17:35.990 09:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.990 09:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.248 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:36.248 09:10:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.248 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46b8f2bf-df04-4ec3-bd34-53462f48bbae 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.249 [2024-11-06 09:10:35.272298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:36.249 [2024-11-06 09:10:35.272515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:36.249 [2024-11-06 09:10:35.272535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:36.249 [2024-11-06 09:10:35.272836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:36.249 
[2024-11-06 09:10:35.272976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:36.249 [2024-11-06 09:10:35.272990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:36.249 [2024-11-06 09:10:35.273243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.249 NewBaseBdev 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.249 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:36.508 [ 00:17:36.508 { 00:17:36.508 "name": "NewBaseBdev", 00:17:36.508 "aliases": [ 00:17:36.508 "46b8f2bf-df04-4ec3-bd34-53462f48bbae" 00:17:36.508 ], 00:17:36.508 "product_name": "Malloc disk", 00:17:36.508 "block_size": 512, 00:17:36.508 "num_blocks": 65536, 00:17:36.508 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:36.508 "assigned_rate_limits": { 00:17:36.508 "rw_ios_per_sec": 0, 00:17:36.508 "rw_mbytes_per_sec": 0, 00:17:36.508 "r_mbytes_per_sec": 0, 00:17:36.508 "w_mbytes_per_sec": 0 00:17:36.508 }, 00:17:36.508 "claimed": true, 00:17:36.508 "claim_type": "exclusive_write", 00:17:36.508 "zoned": false, 00:17:36.508 "supported_io_types": { 00:17:36.508 "read": true, 00:17:36.508 "write": true, 00:17:36.508 "unmap": true, 00:17:36.508 "flush": true, 00:17:36.508 "reset": true, 00:17:36.508 "nvme_admin": false, 00:17:36.508 "nvme_io": false, 00:17:36.508 "nvme_io_md": false, 00:17:36.508 "write_zeroes": true, 00:17:36.508 "zcopy": true, 00:17:36.508 "get_zone_info": false, 00:17:36.508 "zone_management": false, 00:17:36.508 "zone_append": false, 00:17:36.508 "compare": false, 00:17:36.508 "compare_and_write": false, 00:17:36.508 "abort": true, 00:17:36.508 "seek_hole": false, 00:17:36.508 "seek_data": false, 00:17:36.508 "copy": true, 00:17:36.508 "nvme_iov_md": false 00:17:36.508 }, 00:17:36.508 "memory_domains": [ 00:17:36.508 { 00:17:36.508 "dma_device_id": "system", 00:17:36.508 "dma_device_type": 1 00:17:36.508 }, 00:17:36.508 { 00:17:36.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.508 "dma_device_type": 2 00:17:36.508 } 00:17:36.508 ], 00:17:36.508 "driver_specific": {} 00:17:36.508 } 00:17:36.508 ] 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.508 "name": "Existed_Raid", 00:17:36.508 "uuid": "57ab706d-6790-4765-b37a-593708ed7078", 00:17:36.508 "strip_size_kb": 64, 00:17:36.508 "state": "online", 00:17:36.508 "raid_level": "raid0", 00:17:36.508 "superblock": false, 00:17:36.508 "num_base_bdevs": 4, 00:17:36.508 
"num_base_bdevs_discovered": 4, 00:17:36.508 "num_base_bdevs_operational": 4, 00:17:36.508 "base_bdevs_list": [ 00:17:36.508 { 00:17:36.508 "name": "NewBaseBdev", 00:17:36.508 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:36.508 "is_configured": true, 00:17:36.508 "data_offset": 0, 00:17:36.508 "data_size": 65536 00:17:36.508 }, 00:17:36.508 { 00:17:36.508 "name": "BaseBdev2", 00:17:36.508 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:36.508 "is_configured": true, 00:17:36.508 "data_offset": 0, 00:17:36.508 "data_size": 65536 00:17:36.508 }, 00:17:36.508 { 00:17:36.508 "name": "BaseBdev3", 00:17:36.508 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:36.508 "is_configured": true, 00:17:36.508 "data_offset": 0, 00:17:36.508 "data_size": 65536 00:17:36.508 }, 00:17:36.508 { 00:17:36.508 "name": "BaseBdev4", 00:17:36.508 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:36.508 "is_configured": true, 00:17:36.508 "data_offset": 0, 00:17:36.508 "data_size": 65536 00:17:36.508 } 00:17:36.508 ] 00:17:36.508 }' 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.508 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.768 [2024-11-06 09:10:35.704083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.768 "name": "Existed_Raid", 00:17:36.768 "aliases": [ 00:17:36.768 "57ab706d-6790-4765-b37a-593708ed7078" 00:17:36.768 ], 00:17:36.768 "product_name": "Raid Volume", 00:17:36.768 "block_size": 512, 00:17:36.768 "num_blocks": 262144, 00:17:36.768 "uuid": "57ab706d-6790-4765-b37a-593708ed7078", 00:17:36.768 "assigned_rate_limits": { 00:17:36.768 "rw_ios_per_sec": 0, 00:17:36.768 "rw_mbytes_per_sec": 0, 00:17:36.768 "r_mbytes_per_sec": 0, 00:17:36.768 "w_mbytes_per_sec": 0 00:17:36.768 }, 00:17:36.768 "claimed": false, 00:17:36.768 "zoned": false, 00:17:36.768 "supported_io_types": { 00:17:36.768 "read": true, 00:17:36.768 "write": true, 00:17:36.768 "unmap": true, 00:17:36.768 "flush": true, 00:17:36.768 "reset": true, 00:17:36.768 "nvme_admin": false, 00:17:36.768 "nvme_io": false, 00:17:36.768 "nvme_io_md": false, 00:17:36.768 "write_zeroes": true, 00:17:36.768 "zcopy": false, 00:17:36.768 "get_zone_info": false, 00:17:36.768 "zone_management": false, 00:17:36.768 "zone_append": false, 00:17:36.768 "compare": false, 00:17:36.768 "compare_and_write": false, 00:17:36.768 "abort": false, 00:17:36.768 "seek_hole": false, 00:17:36.768 "seek_data": false, 00:17:36.768 "copy": false, 00:17:36.768 "nvme_iov_md": false 00:17:36.768 }, 00:17:36.768 "memory_domains": [ 
00:17:36.768 { 00:17:36.768 "dma_device_id": "system", 00:17:36.768 "dma_device_type": 1 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.768 "dma_device_type": 2 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "system", 00:17:36.768 "dma_device_type": 1 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.768 "dma_device_type": 2 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "system", 00:17:36.768 "dma_device_type": 1 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.768 "dma_device_type": 2 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "system", 00:17:36.768 "dma_device_type": 1 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.768 "dma_device_type": 2 00:17:36.768 } 00:17:36.768 ], 00:17:36.768 "driver_specific": { 00:17:36.768 "raid": { 00:17:36.768 "uuid": "57ab706d-6790-4765-b37a-593708ed7078", 00:17:36.768 "strip_size_kb": 64, 00:17:36.768 "state": "online", 00:17:36.768 "raid_level": "raid0", 00:17:36.768 "superblock": false, 00:17:36.768 "num_base_bdevs": 4, 00:17:36.768 "num_base_bdevs_discovered": 4, 00:17:36.768 "num_base_bdevs_operational": 4, 00:17:36.768 "base_bdevs_list": [ 00:17:36.768 { 00:17:36.768 "name": "NewBaseBdev", 00:17:36.768 "uuid": "46b8f2bf-df04-4ec3-bd34-53462f48bbae", 00:17:36.768 "is_configured": true, 00:17:36.768 "data_offset": 0, 00:17:36.768 "data_size": 65536 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "name": "BaseBdev2", 00:17:36.768 "uuid": "932318e9-319a-47d9-8739-826d87db86c7", 00:17:36.768 "is_configured": true, 00:17:36.768 "data_offset": 0, 00:17:36.768 "data_size": 65536 00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "name": "BaseBdev3", 00:17:36.768 "uuid": "0c8bdb1b-21fb-4337-89d9-72adabd2c7b2", 00:17:36.768 "is_configured": true, 00:17:36.768 "data_offset": 0, 00:17:36.768 "data_size": 65536 
00:17:36.768 }, 00:17:36.768 { 00:17:36.768 "name": "BaseBdev4", 00:17:36.768 "uuid": "36909915-fedf-4adc-ba25-f477544c0f8b", 00:17:36.768 "is_configured": true, 00:17:36.768 "data_offset": 0, 00:17:36.768 "data_size": 65536 00:17:36.768 } 00:17:36.768 ] 00:17:36.768 } 00:17:36.768 } 00:17:36.768 }' 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:36.768 BaseBdev2 00:17:36.768 BaseBdev3 00:17:36.768 BaseBdev4' 00:17:36.768 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.028 
09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.028 09:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.028 [2024-11-06 09:10:36.031359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.028 [2024-11-06 09:10:36.031392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.028 [2024-11-06 09:10:36.031484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.028 [2024-11-06 09:10:36.031555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.028 [2024-11-06 09:10:36.031568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69145 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69145 ']' 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69145 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.028 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69145 00:17:37.287 killing process with pid 69145 00:17:37.287 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:37.287 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:37.287 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69145' 00:17:37.287 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69145 00:17:37.287 [2024-11-06 09:10:36.082772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.287 09:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69145 00:17:37.546 [2024-11-06 09:10:36.481703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:38.924 00:17:38.924 real 0m11.290s 00:17:38.924 user 0m17.874s 00:17:38.924 sys 0m2.279s 00:17:38.924 ************************************ 00:17:38.924 END TEST raid_state_function_test 00:17:38.924 ************************************ 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.924 09:10:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:17:38.924 09:10:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:38.924 09:10:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:38.924 09:10:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.924 ************************************ 00:17:38.924 START TEST raid_state_function_test_sb 00:17:38.924 ************************************ 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:38.924 
09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69811 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69811' 00:17:38.924 Process raid pid: 69811 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69811 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 69811 ']' 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.924 09:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.924 [2024-11-06 09:10:37.787840] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:38.924 [2024-11-06 09:10:37.788133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.183 [2024-11-06 09:10:37.971943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.183 [2024-11-06 09:10:38.095659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.442 [2024-11-06 09:10:38.306933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.442 [2024-11-06 09:10:38.306977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.701 [2024-11-06 09:10:38.633082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.701 [2024-11-06 09:10:38.633141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.701 [2024-11-06 09:10:38.633153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.701 [2024-11-06 09:10:38.633166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.701 [2024-11-06 09:10:38.633175] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:17:39.701 [2024-11-06 09:10:38.633187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.701 [2024-11-06 09:10:38.633194] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:39.701 [2024-11-06 09:10:38.633206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.701 09:10:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.701 "name": "Existed_Raid", 00:17:39.701 "uuid": "4bc73717-078a-4251-bba7-fb0b02d3a855", 00:17:39.701 "strip_size_kb": 64, 00:17:39.701 "state": "configuring", 00:17:39.701 "raid_level": "raid0", 00:17:39.701 "superblock": true, 00:17:39.701 "num_base_bdevs": 4, 00:17:39.701 "num_base_bdevs_discovered": 0, 00:17:39.701 "num_base_bdevs_operational": 4, 00:17:39.701 "base_bdevs_list": [ 00:17:39.701 { 00:17:39.701 "name": "BaseBdev1", 00:17:39.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.701 "is_configured": false, 00:17:39.701 "data_offset": 0, 00:17:39.701 "data_size": 0 00:17:39.701 }, 00:17:39.701 { 00:17:39.701 "name": "BaseBdev2", 00:17:39.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.701 "is_configured": false, 00:17:39.701 "data_offset": 0, 00:17:39.701 "data_size": 0 00:17:39.701 }, 00:17:39.701 { 00:17:39.701 "name": "BaseBdev3", 00:17:39.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.701 "is_configured": false, 00:17:39.701 "data_offset": 0, 00:17:39.701 "data_size": 0 00:17:39.701 }, 00:17:39.701 { 00:17:39.701 "name": "BaseBdev4", 00:17:39.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.701 "is_configured": false, 00:17:39.701 "data_offset": 0, 00:17:39.701 "data_size": 0 00:17:39.701 } 00:17:39.701 ] 00:17:39.701 }' 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.701 09:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.269 [2024-11-06 09:10:39.052460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.269 [2024-11-06 09:10:39.052504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.269 [2024-11-06 09:10:39.064471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.269 [2024-11-06 09:10:39.064636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.269 [2024-11-06 09:10:39.064657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.269 [2024-11-06 09:10:39.064671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.269 [2024-11-06 09:10:39.064679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.269 [2024-11-06 09:10:39.064691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.269 [2024-11-06 09:10:39.064699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:17:40.269 [2024-11-06 09:10:39.064711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.269 [2024-11-06 09:10:39.113071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.269 BaseBdev1 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.269 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.270 [ 00:17:40.270 { 00:17:40.270 "name": "BaseBdev1", 00:17:40.270 "aliases": [ 00:17:40.270 "7703c9f6-5375-4827-a795-fc1be7edfa7f" 00:17:40.270 ], 00:17:40.270 "product_name": "Malloc disk", 00:17:40.270 "block_size": 512, 00:17:40.270 "num_blocks": 65536, 00:17:40.270 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:40.270 "assigned_rate_limits": { 00:17:40.270 "rw_ios_per_sec": 0, 00:17:40.270 "rw_mbytes_per_sec": 0, 00:17:40.270 "r_mbytes_per_sec": 0, 00:17:40.270 "w_mbytes_per_sec": 0 00:17:40.270 }, 00:17:40.270 "claimed": true, 00:17:40.270 "claim_type": "exclusive_write", 00:17:40.270 "zoned": false, 00:17:40.270 "supported_io_types": { 00:17:40.270 "read": true, 00:17:40.270 "write": true, 00:17:40.270 "unmap": true, 00:17:40.270 "flush": true, 00:17:40.270 "reset": true, 00:17:40.270 "nvme_admin": false, 00:17:40.270 "nvme_io": false, 00:17:40.270 "nvme_io_md": false, 00:17:40.270 "write_zeroes": true, 00:17:40.270 "zcopy": true, 00:17:40.270 "get_zone_info": false, 00:17:40.270 "zone_management": false, 00:17:40.270 "zone_append": false, 00:17:40.270 "compare": false, 00:17:40.270 "compare_and_write": false, 00:17:40.270 "abort": true, 00:17:40.270 "seek_hole": false, 00:17:40.270 "seek_data": false, 00:17:40.270 "copy": true, 00:17:40.270 "nvme_iov_md": false 00:17:40.270 }, 00:17:40.270 "memory_domains": [ 00:17:40.270 { 00:17:40.270 "dma_device_id": "system", 00:17:40.270 "dma_device_type": 1 00:17:40.270 }, 00:17:40.270 { 00:17:40.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.270 "dma_device_type": 2 00:17:40.270 } 00:17:40.270 ], 00:17:40.270 "driver_specific": {} 
00:17:40.270 } 00:17:40.270 ] 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.270 "name": "Existed_Raid", 00:17:40.270 "uuid": "f2731e9b-1430-42a8-bff6-4311ce3a30e2", 00:17:40.270 "strip_size_kb": 64, 00:17:40.270 "state": "configuring", 00:17:40.270 "raid_level": "raid0", 00:17:40.270 "superblock": true, 00:17:40.270 "num_base_bdevs": 4, 00:17:40.270 "num_base_bdevs_discovered": 1, 00:17:40.270 "num_base_bdevs_operational": 4, 00:17:40.270 "base_bdevs_list": [ 00:17:40.270 { 00:17:40.270 "name": "BaseBdev1", 00:17:40.270 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:40.270 "is_configured": true, 00:17:40.270 "data_offset": 2048, 00:17:40.270 "data_size": 63488 00:17:40.270 }, 00:17:40.270 { 00:17:40.270 "name": "BaseBdev2", 00:17:40.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.270 "is_configured": false, 00:17:40.270 "data_offset": 0, 00:17:40.270 "data_size": 0 00:17:40.270 }, 00:17:40.270 { 00:17:40.270 "name": "BaseBdev3", 00:17:40.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.270 "is_configured": false, 00:17:40.270 "data_offset": 0, 00:17:40.270 "data_size": 0 00:17:40.270 }, 00:17:40.270 { 00:17:40.270 "name": "BaseBdev4", 00:17:40.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.270 "is_configured": false, 00:17:40.270 "data_offset": 0, 00:17:40.270 "data_size": 0 00:17:40.270 } 00:17:40.270 ] 00:17:40.270 }' 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.270 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.837 [2024-11-06 09:10:39.592455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.837 [2024-11-06 09:10:39.592511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 [2024-11-06 09:10:39.604617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.837 [2024-11-06 09:10:39.606894] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.837 [2024-11-06 09:10:39.607045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.837 [2024-11-06 09:10:39.607159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.837 [2024-11-06 09:10:39.607209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.837 [2024-11-06 09:10:39.607238] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.837 [2024-11-06 09:10:39.607270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:40.837 09:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.837 "name": 
"Existed_Raid", 00:17:40.837 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:40.837 "strip_size_kb": 64, 00:17:40.837 "state": "configuring", 00:17:40.837 "raid_level": "raid0", 00:17:40.837 "superblock": true, 00:17:40.837 "num_base_bdevs": 4, 00:17:40.837 "num_base_bdevs_discovered": 1, 00:17:40.837 "num_base_bdevs_operational": 4, 00:17:40.837 "base_bdevs_list": [ 00:17:40.837 { 00:17:40.837 "name": "BaseBdev1", 00:17:40.837 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:40.837 "is_configured": true, 00:17:40.837 "data_offset": 2048, 00:17:40.837 "data_size": 63488 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": "BaseBdev2", 00:17:40.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.837 "is_configured": false, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 0 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": "BaseBdev3", 00:17:40.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.837 "is_configured": false, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 0 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": "BaseBdev4", 00:17:40.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.837 "is_configured": false, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 0 00:17:40.837 } 00:17:40.837 ] 00:17:40.837 }' 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.837 09:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.096 [2024-11-06 09:10:40.080052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:41.096 BaseBdev2 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.096 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.096 [ 00:17:41.096 { 00:17:41.096 "name": "BaseBdev2", 00:17:41.096 "aliases": [ 00:17:41.096 "c0f6527f-d7f8-49ed-acd8-eed6634636a4" 00:17:41.096 ], 00:17:41.096 "product_name": "Malloc disk", 00:17:41.097 "block_size": 512, 00:17:41.097 "num_blocks": 65536, 00:17:41.097 "uuid": "c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:41.097 
"assigned_rate_limits": { 00:17:41.097 "rw_ios_per_sec": 0, 00:17:41.097 "rw_mbytes_per_sec": 0, 00:17:41.097 "r_mbytes_per_sec": 0, 00:17:41.097 "w_mbytes_per_sec": 0 00:17:41.097 }, 00:17:41.097 "claimed": true, 00:17:41.097 "claim_type": "exclusive_write", 00:17:41.097 "zoned": false, 00:17:41.097 "supported_io_types": { 00:17:41.097 "read": true, 00:17:41.097 "write": true, 00:17:41.097 "unmap": true, 00:17:41.097 "flush": true, 00:17:41.097 "reset": true, 00:17:41.097 "nvme_admin": false, 00:17:41.097 "nvme_io": false, 00:17:41.097 "nvme_io_md": false, 00:17:41.097 "write_zeroes": true, 00:17:41.097 "zcopy": true, 00:17:41.097 "get_zone_info": false, 00:17:41.097 "zone_management": false, 00:17:41.097 "zone_append": false, 00:17:41.097 "compare": false, 00:17:41.097 "compare_and_write": false, 00:17:41.097 "abort": true, 00:17:41.097 "seek_hole": false, 00:17:41.097 "seek_data": false, 00:17:41.097 "copy": true, 00:17:41.097 "nvme_iov_md": false 00:17:41.097 }, 00:17:41.097 "memory_domains": [ 00:17:41.097 { 00:17:41.097 "dma_device_id": "system", 00:17:41.097 "dma_device_type": 1 00:17:41.097 }, 00:17:41.097 { 00:17:41.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.097 "dma_device_type": 2 00:17:41.097 } 00:17:41.097 ], 00:17:41.097 "driver_specific": {} 00:17:41.097 } 00:17:41.097 ] 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.097 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.358 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.358 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.358 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.358 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.358 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.358 "name": "Existed_Raid", 00:17:41.358 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:41.358 "strip_size_kb": 64, 00:17:41.358 "state": "configuring", 00:17:41.358 "raid_level": "raid0", 00:17:41.358 "superblock": true, 00:17:41.358 "num_base_bdevs": 4, 00:17:41.358 "num_base_bdevs_discovered": 2, 00:17:41.358 "num_base_bdevs_operational": 4, 
00:17:41.358 "base_bdevs_list": [ 00:17:41.358 { 00:17:41.358 "name": "BaseBdev1", 00:17:41.358 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:41.358 "is_configured": true, 00:17:41.358 "data_offset": 2048, 00:17:41.358 "data_size": 63488 00:17:41.358 }, 00:17:41.358 { 00:17:41.358 "name": "BaseBdev2", 00:17:41.358 "uuid": "c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:41.358 "is_configured": true, 00:17:41.358 "data_offset": 2048, 00:17:41.358 "data_size": 63488 00:17:41.358 }, 00:17:41.358 { 00:17:41.358 "name": "BaseBdev3", 00:17:41.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.358 "is_configured": false, 00:17:41.358 "data_offset": 0, 00:17:41.358 "data_size": 0 00:17:41.358 }, 00:17:41.358 { 00:17:41.358 "name": "BaseBdev4", 00:17:41.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.358 "is_configured": false, 00:17:41.358 "data_offset": 0, 00:17:41.358 "data_size": 0 00:17:41.358 } 00:17:41.358 ] 00:17:41.359 }' 00:17:41.359 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.359 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.630 [2024-11-06 09:10:40.606664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.630 BaseBdev3 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.630 [ 00:17:41.630 { 00:17:41.630 "name": "BaseBdev3", 00:17:41.630 "aliases": [ 00:17:41.630 "06d95d7c-4847-458e-a4c2-f5558f8a386c" 00:17:41.630 ], 00:17:41.630 "product_name": "Malloc disk", 00:17:41.630 "block_size": 512, 00:17:41.630 "num_blocks": 65536, 00:17:41.630 "uuid": "06d95d7c-4847-458e-a4c2-f5558f8a386c", 00:17:41.630 "assigned_rate_limits": { 00:17:41.630 "rw_ios_per_sec": 0, 00:17:41.630 "rw_mbytes_per_sec": 0, 00:17:41.630 "r_mbytes_per_sec": 0, 00:17:41.630 "w_mbytes_per_sec": 0 00:17:41.630 }, 00:17:41.630 "claimed": true, 00:17:41.630 "claim_type": "exclusive_write", 00:17:41.630 "zoned": false, 00:17:41.630 "supported_io_types": { 00:17:41.630 "read": true, 00:17:41.630 
"write": true, 00:17:41.630 "unmap": true, 00:17:41.630 "flush": true, 00:17:41.630 "reset": true, 00:17:41.630 "nvme_admin": false, 00:17:41.630 "nvme_io": false, 00:17:41.630 "nvme_io_md": false, 00:17:41.630 "write_zeroes": true, 00:17:41.630 "zcopy": true, 00:17:41.630 "get_zone_info": false, 00:17:41.630 "zone_management": false, 00:17:41.630 "zone_append": false, 00:17:41.630 "compare": false, 00:17:41.630 "compare_and_write": false, 00:17:41.630 "abort": true, 00:17:41.630 "seek_hole": false, 00:17:41.630 "seek_data": false, 00:17:41.630 "copy": true, 00:17:41.630 "nvme_iov_md": false 00:17:41.630 }, 00:17:41.630 "memory_domains": [ 00:17:41.630 { 00:17:41.630 "dma_device_id": "system", 00:17:41.630 "dma_device_type": 1 00:17:41.630 }, 00:17:41.630 { 00:17:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.630 "dma_device_type": 2 00:17:41.630 } 00:17:41.630 ], 00:17:41.630 "driver_specific": {} 00:17:41.630 } 00:17:41.630 ] 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.630 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.895 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.895 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.895 "name": "Existed_Raid", 00:17:41.895 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:41.895 "strip_size_kb": 64, 00:17:41.895 "state": "configuring", 00:17:41.895 "raid_level": "raid0", 00:17:41.895 "superblock": true, 00:17:41.895 "num_base_bdevs": 4, 00:17:41.895 "num_base_bdevs_discovered": 3, 00:17:41.895 "num_base_bdevs_operational": 4, 00:17:41.895 "base_bdevs_list": [ 00:17:41.895 { 00:17:41.895 "name": "BaseBdev1", 00:17:41.895 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:41.895 "is_configured": true, 00:17:41.895 "data_offset": 2048, 00:17:41.895 "data_size": 63488 00:17:41.895 }, 00:17:41.895 { 00:17:41.895 "name": "BaseBdev2", 00:17:41.895 "uuid": 
"c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:41.895 "is_configured": true, 00:17:41.895 "data_offset": 2048, 00:17:41.895 "data_size": 63488 00:17:41.895 }, 00:17:41.895 { 00:17:41.895 "name": "BaseBdev3", 00:17:41.895 "uuid": "06d95d7c-4847-458e-a4c2-f5558f8a386c", 00:17:41.895 "is_configured": true, 00:17:41.895 "data_offset": 2048, 00:17:41.895 "data_size": 63488 00:17:41.895 }, 00:17:41.895 { 00:17:41.895 "name": "BaseBdev4", 00:17:41.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.895 "is_configured": false, 00:17:41.895 "data_offset": 0, 00:17:41.895 "data_size": 0 00:17:41.895 } 00:17:41.895 ] 00:17:41.895 }' 00:17:41.895 09:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.895 09:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.162 [2024-11-06 09:10:41.073794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.162 [2024-11-06 09:10:41.074077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:42.162 [2024-11-06 09:10:41.074096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:42.162 [2024-11-06 09:10:41.074397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:42.162 [2024-11-06 09:10:41.074554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:42.162 [2024-11-06 09:10:41.074569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:17:42.162 BaseBdev4 00:17:42.162 [2024-11-06 09:10:41.074705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.162 [ 00:17:42.162 { 00:17:42.162 "name": "BaseBdev4", 00:17:42.162 "aliases": [ 00:17:42.162 "6d4456d2-330f-476e-ab65-dfbb673de091" 00:17:42.162 ], 00:17:42.162 "product_name": "Malloc disk", 00:17:42.162 "block_size": 512, 00:17:42.162 
"num_blocks": 65536, 00:17:42.162 "uuid": "6d4456d2-330f-476e-ab65-dfbb673de091", 00:17:42.162 "assigned_rate_limits": { 00:17:42.162 "rw_ios_per_sec": 0, 00:17:42.162 "rw_mbytes_per_sec": 0, 00:17:42.162 "r_mbytes_per_sec": 0, 00:17:42.162 "w_mbytes_per_sec": 0 00:17:42.162 }, 00:17:42.162 "claimed": true, 00:17:42.162 "claim_type": "exclusive_write", 00:17:42.162 "zoned": false, 00:17:42.162 "supported_io_types": { 00:17:42.162 "read": true, 00:17:42.162 "write": true, 00:17:42.162 "unmap": true, 00:17:42.162 "flush": true, 00:17:42.162 "reset": true, 00:17:42.162 "nvme_admin": false, 00:17:42.162 "nvme_io": false, 00:17:42.162 "nvme_io_md": false, 00:17:42.162 "write_zeroes": true, 00:17:42.162 "zcopy": true, 00:17:42.162 "get_zone_info": false, 00:17:42.162 "zone_management": false, 00:17:42.162 "zone_append": false, 00:17:42.162 "compare": false, 00:17:42.162 "compare_and_write": false, 00:17:42.162 "abort": true, 00:17:42.162 "seek_hole": false, 00:17:42.162 "seek_data": false, 00:17:42.162 "copy": true, 00:17:42.162 "nvme_iov_md": false 00:17:42.162 }, 00:17:42.162 "memory_domains": [ 00:17:42.162 { 00:17:42.162 "dma_device_id": "system", 00:17:42.162 "dma_device_type": 1 00:17:42.162 }, 00:17:42.162 { 00:17:42.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.162 "dma_device_type": 2 00:17:42.162 } 00:17:42.162 ], 00:17:42.162 "driver_specific": {} 00:17:42.162 } 00:17:42.162 ] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.162 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.162 "name": "Existed_Raid", 00:17:42.162 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:42.162 "strip_size_kb": 64, 00:17:42.162 "state": "online", 00:17:42.162 "raid_level": "raid0", 00:17:42.162 "superblock": true, 00:17:42.163 "num_base_bdevs": 4, 
00:17:42.163 "num_base_bdevs_discovered": 4, 00:17:42.163 "num_base_bdevs_operational": 4, 00:17:42.163 "base_bdevs_list": [ 00:17:42.163 { 00:17:42.163 "name": "BaseBdev1", 00:17:42.163 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:42.163 "is_configured": true, 00:17:42.163 "data_offset": 2048, 00:17:42.163 "data_size": 63488 00:17:42.163 }, 00:17:42.163 { 00:17:42.163 "name": "BaseBdev2", 00:17:42.163 "uuid": "c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:42.163 "is_configured": true, 00:17:42.163 "data_offset": 2048, 00:17:42.163 "data_size": 63488 00:17:42.163 }, 00:17:42.163 { 00:17:42.163 "name": "BaseBdev3", 00:17:42.163 "uuid": "06d95d7c-4847-458e-a4c2-f5558f8a386c", 00:17:42.163 "is_configured": true, 00:17:42.163 "data_offset": 2048, 00:17:42.163 "data_size": 63488 00:17:42.163 }, 00:17:42.163 { 00:17:42.163 "name": "BaseBdev4", 00:17:42.163 "uuid": "6d4456d2-330f-476e-ab65-dfbb673de091", 00:17:42.163 "is_configured": true, 00:17:42.163 "data_offset": 2048, 00:17:42.163 "data_size": 63488 00:17:42.163 } 00:17:42.163 ] 00:17:42.163 }' 00:17:42.163 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.163 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:42.753 
09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:42.753 [2024-11-06 09:10:41.578171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:42.753 "name": "Existed_Raid", 00:17:42.753 "aliases": [ 00:17:42.753 "d14925ef-d7d5-42e9-9510-206cbfcdc5b1" 00:17:42.753 ], 00:17:42.753 "product_name": "Raid Volume", 00:17:42.753 "block_size": 512, 00:17:42.753 "num_blocks": 253952, 00:17:42.753 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:42.753 "assigned_rate_limits": { 00:17:42.753 "rw_ios_per_sec": 0, 00:17:42.753 "rw_mbytes_per_sec": 0, 00:17:42.753 "r_mbytes_per_sec": 0, 00:17:42.753 "w_mbytes_per_sec": 0 00:17:42.753 }, 00:17:42.753 "claimed": false, 00:17:42.753 "zoned": false, 00:17:42.753 "supported_io_types": { 00:17:42.753 "read": true, 00:17:42.753 "write": true, 00:17:42.753 "unmap": true, 00:17:42.753 "flush": true, 00:17:42.753 "reset": true, 00:17:42.753 "nvme_admin": false, 00:17:42.753 "nvme_io": false, 00:17:42.753 "nvme_io_md": false, 00:17:42.753 "write_zeroes": true, 00:17:42.753 "zcopy": false, 00:17:42.753 "get_zone_info": false, 00:17:42.753 "zone_management": false, 00:17:42.753 "zone_append": false, 00:17:42.753 "compare": false, 00:17:42.753 "compare_and_write": false, 00:17:42.753 "abort": false, 00:17:42.753 "seek_hole": false, 00:17:42.753 "seek_data": false, 00:17:42.753 "copy": false, 00:17:42.753 
"nvme_iov_md": false 00:17:42.753 }, 00:17:42.753 "memory_domains": [ 00:17:42.753 { 00:17:42.753 "dma_device_id": "system", 00:17:42.753 "dma_device_type": 1 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.753 "dma_device_type": 2 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "system", 00:17:42.753 "dma_device_type": 1 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.753 "dma_device_type": 2 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "system", 00:17:42.753 "dma_device_type": 1 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.753 "dma_device_type": 2 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "system", 00:17:42.753 "dma_device_type": 1 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.753 "dma_device_type": 2 00:17:42.753 } 00:17:42.753 ], 00:17:42.753 "driver_specific": { 00:17:42.753 "raid": { 00:17:42.753 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:42.753 "strip_size_kb": 64, 00:17:42.753 "state": "online", 00:17:42.753 "raid_level": "raid0", 00:17:42.753 "superblock": true, 00:17:42.753 "num_base_bdevs": 4, 00:17:42.753 "num_base_bdevs_discovered": 4, 00:17:42.753 "num_base_bdevs_operational": 4, 00:17:42.753 "base_bdevs_list": [ 00:17:42.753 { 00:17:42.753 "name": "BaseBdev1", 00:17:42.753 "uuid": "7703c9f6-5375-4827-a795-fc1be7edfa7f", 00:17:42.753 "is_configured": true, 00:17:42.753 "data_offset": 2048, 00:17:42.753 "data_size": 63488 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "name": "BaseBdev2", 00:17:42.753 "uuid": "c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:42.753 "is_configured": true, 00:17:42.753 "data_offset": 2048, 00:17:42.753 "data_size": 63488 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "name": "BaseBdev3", 00:17:42.753 "uuid": "06d95d7c-4847-458e-a4c2-f5558f8a386c", 00:17:42.753 "is_configured": true, 
00:17:42.753 "data_offset": 2048, 00:17:42.753 "data_size": 63488 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "name": "BaseBdev4", 00:17:42.753 "uuid": "6d4456d2-330f-476e-ab65-dfbb673de091", 00:17:42.753 "is_configured": true, 00:17:42.753 "data_offset": 2048, 00:17:42.753 "data_size": 63488 00:17:42.753 } 00:17:42.753 ] 00:17:42.753 } 00:17:42.753 } 00:17:42.753 }' 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.753 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:42.753 BaseBdev2 00:17:42.753 BaseBdev3 00:17:42.754 BaseBdev4' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.754 09:10:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.754 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.016 [2024-11-06 09:10:41.901906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.016 [2024-11-06 09:10:41.902048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.016 [2024-11-06 09:10:41.902132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.016 09:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.016 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.017 "name": "Existed_Raid", 00:17:43.017 "uuid": "d14925ef-d7d5-42e9-9510-206cbfcdc5b1", 00:17:43.017 "strip_size_kb": 64, 00:17:43.017 "state": "offline", 00:17:43.017 "raid_level": "raid0", 00:17:43.017 "superblock": true, 00:17:43.017 "num_base_bdevs": 4, 00:17:43.017 "num_base_bdevs_discovered": 3, 00:17:43.017 "num_base_bdevs_operational": 3, 00:17:43.017 "base_bdevs_list": [ 00:17:43.017 { 00:17:43.017 "name": null, 00:17:43.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.017 "is_configured": false, 00:17:43.017 "data_offset": 0, 00:17:43.017 "data_size": 63488 00:17:43.017 }, 00:17:43.017 { 00:17:43.017 "name": "BaseBdev2", 00:17:43.017 "uuid": "c0f6527f-d7f8-49ed-acd8-eed6634636a4", 00:17:43.017 "is_configured": true, 00:17:43.017 "data_offset": 2048, 00:17:43.017 "data_size": 63488 00:17:43.017 }, 00:17:43.017 { 00:17:43.017 "name": "BaseBdev3", 00:17:43.017 "uuid": "06d95d7c-4847-458e-a4c2-f5558f8a386c", 00:17:43.017 "is_configured": true, 00:17:43.017 "data_offset": 2048, 00:17:43.017 "data_size": 63488 00:17:43.017 }, 00:17:43.017 { 00:17:43.017 "name": "BaseBdev4", 00:17:43.017 "uuid": "6d4456d2-330f-476e-ab65-dfbb673de091", 00:17:43.017 "is_configured": true, 00:17:43.017 "data_offset": 2048, 00:17:43.017 "data_size": 63488 00:17:43.017 } 00:17:43.017 ] 00:17:43.017 }' 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.017 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.585 
09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 [2024-11-06 09:10:42.490459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.844 [2024-11-06 09:10:42.639196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:43.844 09:10:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.844 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.844 [2024-11-06 09:10:42.791680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:43.844 [2024-11-06 09:10:42.791853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:44.103 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.103 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 BaseBdev2 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 [ 00:17:44.104 { 00:17:44.104 "name": "BaseBdev2", 00:17:44.104 "aliases": [ 00:17:44.104 
"24b7753b-e271-4ae9-9690-e186eb1cf4d9" 00:17:44.104 ], 00:17:44.104 "product_name": "Malloc disk", 00:17:44.104 "block_size": 512, 00:17:44.104 "num_blocks": 65536, 00:17:44.104 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:44.104 "assigned_rate_limits": { 00:17:44.104 "rw_ios_per_sec": 0, 00:17:44.104 "rw_mbytes_per_sec": 0, 00:17:44.104 "r_mbytes_per_sec": 0, 00:17:44.104 "w_mbytes_per_sec": 0 00:17:44.104 }, 00:17:44.104 "claimed": false, 00:17:44.104 "zoned": false, 00:17:44.104 "supported_io_types": { 00:17:44.104 "read": true, 00:17:44.104 "write": true, 00:17:44.104 "unmap": true, 00:17:44.104 "flush": true, 00:17:44.104 "reset": true, 00:17:44.104 "nvme_admin": false, 00:17:44.104 "nvme_io": false, 00:17:44.104 "nvme_io_md": false, 00:17:44.104 "write_zeroes": true, 00:17:44.104 "zcopy": true, 00:17:44.104 "get_zone_info": false, 00:17:44.104 "zone_management": false, 00:17:44.104 "zone_append": false, 00:17:44.104 "compare": false, 00:17:44.104 "compare_and_write": false, 00:17:44.104 "abort": true, 00:17:44.104 "seek_hole": false, 00:17:44.104 "seek_data": false, 00:17:44.104 "copy": true, 00:17:44.104 "nvme_iov_md": false 00:17:44.104 }, 00:17:44.104 "memory_domains": [ 00:17:44.104 { 00:17:44.104 "dma_device_id": "system", 00:17:44.104 "dma_device_type": 1 00:17:44.104 }, 00:17:44.104 { 00:17:44.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.104 "dma_device_type": 2 00:17:44.104 } 00:17:44.104 ], 00:17:44.104 "driver_specific": {} 00:17:44.104 } 00:17:44.104 ] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:44.104 09:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 BaseBdev3 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.104 [ 00:17:44.104 { 
00:17:44.104 "name": "BaseBdev3", 00:17:44.104 "aliases": [ 00:17:44.104 "5af57463-e86b-4518-bb2e-9b47a9c5077d" 00:17:44.104 ], 00:17:44.104 "product_name": "Malloc disk", 00:17:44.104 "block_size": 512, 00:17:44.104 "num_blocks": 65536, 00:17:44.104 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:44.104 "assigned_rate_limits": { 00:17:44.104 "rw_ios_per_sec": 0, 00:17:44.104 "rw_mbytes_per_sec": 0, 00:17:44.104 "r_mbytes_per_sec": 0, 00:17:44.104 "w_mbytes_per_sec": 0 00:17:44.104 }, 00:17:44.104 "claimed": false, 00:17:44.104 "zoned": false, 00:17:44.104 "supported_io_types": { 00:17:44.104 "read": true, 00:17:44.104 "write": true, 00:17:44.104 "unmap": true, 00:17:44.104 "flush": true, 00:17:44.104 "reset": true, 00:17:44.104 "nvme_admin": false, 00:17:44.104 "nvme_io": false, 00:17:44.104 "nvme_io_md": false, 00:17:44.104 "write_zeroes": true, 00:17:44.104 "zcopy": true, 00:17:44.104 "get_zone_info": false, 00:17:44.104 "zone_management": false, 00:17:44.104 "zone_append": false, 00:17:44.104 "compare": false, 00:17:44.104 "compare_and_write": false, 00:17:44.104 "abort": true, 00:17:44.104 "seek_hole": false, 00:17:44.104 "seek_data": false, 00:17:44.104 "copy": true, 00:17:44.104 "nvme_iov_md": false 00:17:44.104 }, 00:17:44.104 "memory_domains": [ 00:17:44.104 { 00:17:44.104 "dma_device_id": "system", 00:17:44.104 "dma_device_type": 1 00:17:44.104 }, 00:17:44.104 { 00:17:44.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.104 "dma_device_type": 2 00:17:44.104 } 00:17:44.104 ], 00:17:44.104 "driver_specific": {} 00:17:44.104 } 00:17:44.104 ] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.104 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.364 BaseBdev4 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.364 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:44.364 [ 00:17:44.364 { 00:17:44.364 "name": "BaseBdev4", 00:17:44.364 "aliases": [ 00:17:44.364 "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b" 00:17:44.364 ], 00:17:44.364 "product_name": "Malloc disk", 00:17:44.364 "block_size": 512, 00:17:44.364 "num_blocks": 65536, 00:17:44.364 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:44.364 "assigned_rate_limits": { 00:17:44.364 "rw_ios_per_sec": 0, 00:17:44.364 "rw_mbytes_per_sec": 0, 00:17:44.364 "r_mbytes_per_sec": 0, 00:17:44.364 "w_mbytes_per_sec": 0 00:17:44.364 }, 00:17:44.364 "claimed": false, 00:17:44.364 "zoned": false, 00:17:44.364 "supported_io_types": { 00:17:44.364 "read": true, 00:17:44.364 "write": true, 00:17:44.364 "unmap": true, 00:17:44.364 "flush": true, 00:17:44.364 "reset": true, 00:17:44.364 "nvme_admin": false, 00:17:44.364 "nvme_io": false, 00:17:44.364 "nvme_io_md": false, 00:17:44.364 "write_zeroes": true, 00:17:44.364 "zcopy": true, 00:17:44.364 "get_zone_info": false, 00:17:44.364 "zone_management": false, 00:17:44.365 "zone_append": false, 00:17:44.365 "compare": false, 00:17:44.365 "compare_and_write": false, 00:17:44.365 "abort": true, 00:17:44.365 "seek_hole": false, 00:17:44.365 "seek_data": false, 00:17:44.365 "copy": true, 00:17:44.365 "nvme_iov_md": false 00:17:44.365 }, 00:17:44.365 "memory_domains": [ 00:17:44.365 { 00:17:44.365 "dma_device_id": "system", 00:17:44.365 "dma_device_type": 1 00:17:44.365 }, 00:17:44.365 { 00:17:44.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.365 "dma_device_type": 2 00:17:44.365 } 00:17:44.365 ], 00:17:44.365 "driver_specific": {} 00:17:44.365 } 00:17:44.365 ] 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:44.365 09:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.365 [2024-11-06 09:10:43.204520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.365 [2024-11-06 09:10:43.204566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.365 [2024-11-06 09:10:43.204591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.365 [2024-11-06 09:10:43.206701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.365 [2024-11-06 09:10:43.206754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.365 "name": "Existed_Raid", 00:17:44.365 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:44.365 "strip_size_kb": 64, 00:17:44.365 "state": "configuring", 00:17:44.365 "raid_level": "raid0", 00:17:44.365 "superblock": true, 00:17:44.365 "num_base_bdevs": 4, 00:17:44.365 "num_base_bdevs_discovered": 3, 00:17:44.365 "num_base_bdevs_operational": 4, 00:17:44.365 "base_bdevs_list": [ 00:17:44.365 { 00:17:44.365 "name": "BaseBdev1", 00:17:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.365 "is_configured": false, 00:17:44.365 "data_offset": 0, 00:17:44.365 "data_size": 0 00:17:44.365 }, 00:17:44.365 { 00:17:44.365 "name": "BaseBdev2", 00:17:44.365 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:44.365 "is_configured": true, 00:17:44.365 "data_offset": 2048, 00:17:44.365 "data_size": 63488 
00:17:44.365 }, 00:17:44.365 { 00:17:44.365 "name": "BaseBdev3", 00:17:44.365 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:44.365 "is_configured": true, 00:17:44.365 "data_offset": 2048, 00:17:44.365 "data_size": 63488 00:17:44.365 }, 00:17:44.365 { 00:17:44.365 "name": "BaseBdev4", 00:17:44.365 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:44.365 "is_configured": true, 00:17:44.365 "data_offset": 2048, 00:17:44.365 "data_size": 63488 00:17:44.365 } 00:17:44.365 ] 00:17:44.365 }' 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.365 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.624 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.625 [2024-11-06 09:10:43.580362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.625 "name": "Existed_Raid", 00:17:44.625 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:44.625 "strip_size_kb": 64, 00:17:44.625 "state": "configuring", 00:17:44.625 "raid_level": "raid0", 00:17:44.625 "superblock": true, 00:17:44.625 "num_base_bdevs": 4, 00:17:44.625 "num_base_bdevs_discovered": 2, 00:17:44.625 "num_base_bdevs_operational": 4, 00:17:44.625 "base_bdevs_list": [ 00:17:44.625 { 00:17:44.625 "name": "BaseBdev1", 00:17:44.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.625 "is_configured": false, 00:17:44.625 "data_offset": 0, 00:17:44.625 "data_size": 0 00:17:44.625 }, 00:17:44.625 { 00:17:44.625 "name": null, 00:17:44.625 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:44.625 "is_configured": false, 00:17:44.625 "data_offset": 0, 00:17:44.625 "data_size": 63488 
00:17:44.625 }, 00:17:44.625 { 00:17:44.625 "name": "BaseBdev3", 00:17:44.625 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:44.625 "is_configured": true, 00:17:44.625 "data_offset": 2048, 00:17:44.625 "data_size": 63488 00:17:44.625 }, 00:17:44.625 { 00:17:44.625 "name": "BaseBdev4", 00:17:44.625 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:44.625 "is_configured": true, 00:17:44.625 "data_offset": 2048, 00:17:44.625 "data_size": 63488 00:17:44.625 } 00:17:44.625 ] 00:17:44.625 }' 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.625 09:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 [2024-11-06 09:10:44.109845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.194 BaseBdev1 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 [ 00:17:45.194 { 00:17:45.194 "name": "BaseBdev1", 00:17:45.194 "aliases": [ 00:17:45.194 "4a372359-65df-485a-a9c1-09bee45f2d8e" 00:17:45.194 ], 00:17:45.194 "product_name": "Malloc disk", 00:17:45.194 "block_size": 512, 00:17:45.194 "num_blocks": 65536, 00:17:45.194 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:45.194 "assigned_rate_limits": { 00:17:45.194 "rw_ios_per_sec": 0, 00:17:45.194 "rw_mbytes_per_sec": 0, 
00:17:45.194 "r_mbytes_per_sec": 0, 00:17:45.194 "w_mbytes_per_sec": 0 00:17:45.194 }, 00:17:45.194 "claimed": true, 00:17:45.194 "claim_type": "exclusive_write", 00:17:45.194 "zoned": false, 00:17:45.194 "supported_io_types": { 00:17:45.194 "read": true, 00:17:45.194 "write": true, 00:17:45.194 "unmap": true, 00:17:45.194 "flush": true, 00:17:45.194 "reset": true, 00:17:45.194 "nvme_admin": false, 00:17:45.194 "nvme_io": false, 00:17:45.194 "nvme_io_md": false, 00:17:45.194 "write_zeroes": true, 00:17:45.194 "zcopy": true, 00:17:45.194 "get_zone_info": false, 00:17:45.194 "zone_management": false, 00:17:45.194 "zone_append": false, 00:17:45.194 "compare": false, 00:17:45.194 "compare_and_write": false, 00:17:45.194 "abort": true, 00:17:45.194 "seek_hole": false, 00:17:45.194 "seek_data": false, 00:17:45.194 "copy": true, 00:17:45.194 "nvme_iov_md": false 00:17:45.194 }, 00:17:45.194 "memory_domains": [ 00:17:45.194 { 00:17:45.194 "dma_device_id": "system", 00:17:45.194 "dma_device_type": 1 00:17:45.194 }, 00:17:45.194 { 00:17:45.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.194 "dma_device_type": 2 00:17:45.194 } 00:17:45.194 ], 00:17:45.194 "driver_specific": {} 00:17:45.194 } 00:17:45.194 ] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:45.194 09:10:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.194 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.194 "name": "Existed_Raid", 00:17:45.194 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:45.194 "strip_size_kb": 64, 00:17:45.194 "state": "configuring", 00:17:45.194 "raid_level": "raid0", 00:17:45.194 "superblock": true, 00:17:45.194 "num_base_bdevs": 4, 00:17:45.194 "num_base_bdevs_discovered": 3, 00:17:45.194 "num_base_bdevs_operational": 4, 00:17:45.194 "base_bdevs_list": [ 00:17:45.194 { 00:17:45.194 "name": "BaseBdev1", 00:17:45.194 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:45.194 "is_configured": true, 00:17:45.194 "data_offset": 2048, 00:17:45.194 "data_size": 63488 00:17:45.195 }, 00:17:45.195 { 
00:17:45.195 "name": null, 00:17:45.195 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:45.195 "is_configured": false, 00:17:45.195 "data_offset": 0, 00:17:45.195 "data_size": 63488 00:17:45.195 }, 00:17:45.195 { 00:17:45.195 "name": "BaseBdev3", 00:17:45.195 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:45.195 "is_configured": true, 00:17:45.195 "data_offset": 2048, 00:17:45.195 "data_size": 63488 00:17:45.195 }, 00:17:45.195 { 00:17:45.195 "name": "BaseBdev4", 00:17:45.195 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:45.195 "is_configured": true, 00:17:45.195 "data_offset": 2048, 00:17:45.195 "data_size": 63488 00:17:45.195 } 00:17:45.195 ] 00:17:45.195 }' 00:17:45.195 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.195 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 [2024-11-06 09:10:44.645879] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.763 09:10:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.763 "name": "Existed_Raid", 00:17:45.763 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:45.763 "strip_size_kb": 64, 00:17:45.763 "state": "configuring", 00:17:45.763 "raid_level": "raid0", 00:17:45.763 "superblock": true, 00:17:45.763 "num_base_bdevs": 4, 00:17:45.763 "num_base_bdevs_discovered": 2, 00:17:45.763 "num_base_bdevs_operational": 4, 00:17:45.763 "base_bdevs_list": [ 00:17:45.763 { 00:17:45.763 "name": "BaseBdev1", 00:17:45.763 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:45.763 "is_configured": true, 00:17:45.763 "data_offset": 2048, 00:17:45.763 "data_size": 63488 00:17:45.763 }, 00:17:45.763 { 00:17:45.763 "name": null, 00:17:45.763 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:45.763 "is_configured": false, 00:17:45.763 "data_offset": 0, 00:17:45.763 "data_size": 63488 00:17:45.763 }, 00:17:45.763 { 00:17:45.763 "name": null, 00:17:45.763 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:45.763 "is_configured": false, 00:17:45.763 "data_offset": 0, 00:17:45.763 "data_size": 63488 00:17:45.763 }, 00:17:45.763 { 00:17:45.763 "name": "BaseBdev4", 00:17:45.763 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:45.763 "is_configured": true, 00:17:45.763 "data_offset": 2048, 00:17:45.763 "data_size": 63488 00:17:45.763 } 00:17:45.763 ] 00:17:45.763 }' 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.763 09:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.023 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.284 
09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 [2024-11-06 09:10:45.109857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.284 "name": "Existed_Raid", 00:17:46.284 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:46.284 "strip_size_kb": 64, 00:17:46.284 "state": "configuring", 00:17:46.284 "raid_level": "raid0", 00:17:46.284 "superblock": true, 00:17:46.284 "num_base_bdevs": 4, 00:17:46.284 "num_base_bdevs_discovered": 3, 00:17:46.284 "num_base_bdevs_operational": 4, 00:17:46.284 "base_bdevs_list": [ 00:17:46.284 { 00:17:46.284 "name": "BaseBdev1", 00:17:46.284 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:46.284 "is_configured": true, 00:17:46.284 "data_offset": 2048, 00:17:46.284 "data_size": 63488 00:17:46.284 }, 00:17:46.284 { 00:17:46.284 "name": null, 00:17:46.284 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:46.284 "is_configured": false, 00:17:46.284 "data_offset": 0, 00:17:46.284 "data_size": 63488 00:17:46.284 }, 00:17:46.284 { 00:17:46.284 "name": "BaseBdev3", 00:17:46.284 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:46.284 "is_configured": true, 00:17:46.284 "data_offset": 2048, 00:17:46.284 "data_size": 63488 00:17:46.284 }, 00:17:46.284 { 00:17:46.284 "name": "BaseBdev4", 00:17:46.284 "uuid": 
"dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:46.284 "is_configured": true, 00:17:46.284 "data_offset": 2048, 00:17:46.284 "data_size": 63488 00:17:46.284 } 00:17:46.284 ] 00:17:46.284 }' 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.284 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.544 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.804 [2024-11-06 09:10:45.585890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.804 "name": "Existed_Raid", 00:17:46.804 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:46.804 "strip_size_kb": 64, 00:17:46.804 "state": "configuring", 00:17:46.804 "raid_level": "raid0", 00:17:46.804 "superblock": true, 00:17:46.804 "num_base_bdevs": 4, 00:17:46.804 "num_base_bdevs_discovered": 2, 00:17:46.804 "num_base_bdevs_operational": 4, 00:17:46.804 "base_bdevs_list": [ 00:17:46.804 { 00:17:46.804 "name": null, 00:17:46.804 
"uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:46.804 "is_configured": false, 00:17:46.804 "data_offset": 0, 00:17:46.804 "data_size": 63488 00:17:46.804 }, 00:17:46.804 { 00:17:46.804 "name": null, 00:17:46.804 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:46.804 "is_configured": false, 00:17:46.804 "data_offset": 0, 00:17:46.804 "data_size": 63488 00:17:46.804 }, 00:17:46.804 { 00:17:46.804 "name": "BaseBdev3", 00:17:46.804 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:46.804 "is_configured": true, 00:17:46.804 "data_offset": 2048, 00:17:46.804 "data_size": 63488 00:17:46.804 }, 00:17:46.804 { 00:17:46.804 "name": "BaseBdev4", 00:17:46.804 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:46.804 "is_configured": true, 00:17:46.804 "data_offset": 2048, 00:17:46.804 "data_size": 63488 00:17:46.804 } 00:17:46.804 ] 00:17:46.804 }' 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.804 09:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.374 [2024-11-06 09:10:46.177913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.374 09:10:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.374 "name": "Existed_Raid", 00:17:47.374 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:47.374 "strip_size_kb": 64, 00:17:47.374 "state": "configuring", 00:17:47.374 "raid_level": "raid0", 00:17:47.374 "superblock": true, 00:17:47.374 "num_base_bdevs": 4, 00:17:47.374 "num_base_bdevs_discovered": 3, 00:17:47.374 "num_base_bdevs_operational": 4, 00:17:47.374 "base_bdevs_list": [ 00:17:47.374 { 00:17:47.374 "name": null, 00:17:47.374 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:47.374 "is_configured": false, 00:17:47.374 "data_offset": 0, 00:17:47.374 "data_size": 63488 00:17:47.374 }, 00:17:47.374 { 00:17:47.374 "name": "BaseBdev2", 00:17:47.374 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:47.374 "is_configured": true, 00:17:47.374 "data_offset": 2048, 00:17:47.374 "data_size": 63488 00:17:47.374 }, 00:17:47.374 { 00:17:47.374 "name": "BaseBdev3", 00:17:47.374 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:47.374 "is_configured": true, 00:17:47.374 "data_offset": 2048, 00:17:47.374 "data_size": 63488 00:17:47.374 }, 00:17:47.374 { 00:17:47.374 "name": "BaseBdev4", 00:17:47.374 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:47.374 "is_configured": true, 00:17:47.374 "data_offset": 2048, 00:17:47.374 "data_size": 63488 00:17:47.374 } 00:17:47.374 ] 00:17:47.374 }' 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.374 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.633 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.633 09:10:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:47.633 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.633 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a372359-65df-485a-a9c1-09bee45f2d8e 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.893 [2024-11-06 09:10:46.784419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:47.893 [2024-11-06 09:10:46.784860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:47.893 [2024-11-06 09:10:46.784882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:47.893 [2024-11-06 09:10:46.785157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:47.893 [2024-11-06 09:10:46.785309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:47.893 [2024-11-06 09:10:46.785324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:47.893 [2024-11-06 09:10:46.785450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.893 NewBaseBdev 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:47.893 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.893 09:10:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.893 [ 00:17:47.893 { 00:17:47.893 "name": "NewBaseBdev", 00:17:47.893 "aliases": [ 00:17:47.893 "4a372359-65df-485a-a9c1-09bee45f2d8e" 00:17:47.893 ], 00:17:47.893 "product_name": "Malloc disk", 00:17:47.893 "block_size": 512, 00:17:47.893 "num_blocks": 65536, 00:17:47.893 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:47.893 "assigned_rate_limits": { 00:17:47.893 "rw_ios_per_sec": 0, 00:17:47.893 "rw_mbytes_per_sec": 0, 00:17:47.893 "r_mbytes_per_sec": 0, 00:17:47.893 "w_mbytes_per_sec": 0 00:17:47.893 }, 00:17:47.893 "claimed": true, 00:17:47.893 "claim_type": "exclusive_write", 00:17:47.893 "zoned": false, 00:17:47.893 "supported_io_types": { 00:17:47.893 "read": true, 00:17:47.893 "write": true, 00:17:47.893 "unmap": true, 00:17:47.893 "flush": true, 00:17:47.893 "reset": true, 00:17:47.893 "nvme_admin": false, 00:17:47.893 "nvme_io": false, 00:17:47.893 "nvme_io_md": false, 00:17:47.893 "write_zeroes": true, 00:17:47.893 "zcopy": true, 00:17:47.893 "get_zone_info": false, 00:17:47.893 "zone_management": false, 00:17:47.894 "zone_append": false, 00:17:47.894 "compare": false, 00:17:47.894 "compare_and_write": false, 00:17:47.894 "abort": true, 00:17:47.894 "seek_hole": false, 00:17:47.894 "seek_data": false, 00:17:47.894 "copy": true, 00:17:47.894 "nvme_iov_md": false 00:17:47.894 }, 00:17:47.894 "memory_domains": [ 00:17:47.894 { 00:17:47.894 "dma_device_id": "system", 00:17:47.894 "dma_device_type": 1 00:17:47.894 }, 00:17:47.894 { 00:17:47.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.894 "dma_device_type": 2 00:17:47.894 } 00:17:47.894 ], 00:17:47.894 "driver_specific": {} 00:17:47.894 } 00:17:47.894 ] 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:47.894 09:10:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.894 "name": "Existed_Raid", 00:17:47.894 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:47.894 "strip_size_kb": 64, 00:17:47.894 
"state": "online", 00:17:47.894 "raid_level": "raid0", 00:17:47.894 "superblock": true, 00:17:47.894 "num_base_bdevs": 4, 00:17:47.894 "num_base_bdevs_discovered": 4, 00:17:47.894 "num_base_bdevs_operational": 4, 00:17:47.894 "base_bdevs_list": [ 00:17:47.894 { 00:17:47.894 "name": "NewBaseBdev", 00:17:47.894 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:47.894 "is_configured": true, 00:17:47.894 "data_offset": 2048, 00:17:47.894 "data_size": 63488 00:17:47.894 }, 00:17:47.894 { 00:17:47.894 "name": "BaseBdev2", 00:17:47.894 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:47.894 "is_configured": true, 00:17:47.894 "data_offset": 2048, 00:17:47.894 "data_size": 63488 00:17:47.894 }, 00:17:47.894 { 00:17:47.894 "name": "BaseBdev3", 00:17:47.894 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:47.894 "is_configured": true, 00:17:47.894 "data_offset": 2048, 00:17:47.894 "data_size": 63488 00:17:47.894 }, 00:17:47.894 { 00:17:47.894 "name": "BaseBdev4", 00:17:47.894 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:47.894 "is_configured": true, 00:17:47.894 "data_offset": 2048, 00:17:47.894 "data_size": 63488 00:17:47.894 } 00:17:47.894 ] 00:17:47.894 }' 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.894 09:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.463 
09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.463 [2024-11-06 09:10:47.252409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.463 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.463 "name": "Existed_Raid", 00:17:48.463 "aliases": [ 00:17:48.463 "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9" 00:17:48.463 ], 00:17:48.463 "product_name": "Raid Volume", 00:17:48.463 "block_size": 512, 00:17:48.463 "num_blocks": 253952, 00:17:48.463 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:48.463 "assigned_rate_limits": { 00:17:48.463 "rw_ios_per_sec": 0, 00:17:48.463 "rw_mbytes_per_sec": 0, 00:17:48.463 "r_mbytes_per_sec": 0, 00:17:48.463 "w_mbytes_per_sec": 0 00:17:48.463 }, 00:17:48.463 "claimed": false, 00:17:48.463 "zoned": false, 00:17:48.463 "supported_io_types": { 00:17:48.463 "read": true, 00:17:48.463 "write": true, 00:17:48.463 "unmap": true, 00:17:48.463 "flush": true, 00:17:48.463 "reset": true, 00:17:48.463 "nvme_admin": false, 00:17:48.463 "nvme_io": false, 00:17:48.463 "nvme_io_md": false, 00:17:48.463 "write_zeroes": true, 00:17:48.463 "zcopy": false, 00:17:48.463 "get_zone_info": false, 00:17:48.463 "zone_management": false, 00:17:48.463 "zone_append": false, 00:17:48.463 "compare": false, 00:17:48.463 "compare_and_write": false, 00:17:48.463 "abort": 
false, 00:17:48.463 "seek_hole": false, 00:17:48.463 "seek_data": false, 00:17:48.463 "copy": false, 00:17:48.463 "nvme_iov_md": false 00:17:48.463 }, 00:17:48.463 "memory_domains": [ 00:17:48.463 { 00:17:48.463 "dma_device_id": "system", 00:17:48.463 "dma_device_type": 1 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.463 "dma_device_type": 2 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "system", 00:17:48.463 "dma_device_type": 1 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.463 "dma_device_type": 2 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "system", 00:17:48.463 "dma_device_type": 1 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.463 "dma_device_type": 2 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "system", 00:17:48.463 "dma_device_type": 1 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.463 "dma_device_type": 2 00:17:48.463 } 00:17:48.463 ], 00:17:48.463 "driver_specific": { 00:17:48.463 "raid": { 00:17:48.463 "uuid": "1ad87328-0c9e-4fe7-b7b5-f39d19f283e9", 00:17:48.463 "strip_size_kb": 64, 00:17:48.463 "state": "online", 00:17:48.463 "raid_level": "raid0", 00:17:48.463 "superblock": true, 00:17:48.463 "num_base_bdevs": 4, 00:17:48.463 "num_base_bdevs_discovered": 4, 00:17:48.463 "num_base_bdevs_operational": 4, 00:17:48.463 "base_bdevs_list": [ 00:17:48.463 { 00:17:48.463 "name": "NewBaseBdev", 00:17:48.463 "uuid": "4a372359-65df-485a-a9c1-09bee45f2d8e", 00:17:48.463 "is_configured": true, 00:17:48.463 "data_offset": 2048, 00:17:48.463 "data_size": 63488 00:17:48.463 }, 00:17:48.463 { 00:17:48.463 "name": "BaseBdev2", 00:17:48.463 "uuid": "24b7753b-e271-4ae9-9690-e186eb1cf4d9", 00:17:48.463 "is_configured": true, 00:17:48.463 "data_offset": 2048, 00:17:48.463 "data_size": 63488 00:17:48.463 }, 00:17:48.464 { 00:17:48.464 
"name": "BaseBdev3", 00:17:48.464 "uuid": "5af57463-e86b-4518-bb2e-9b47a9c5077d", 00:17:48.464 "is_configured": true, 00:17:48.464 "data_offset": 2048, 00:17:48.464 "data_size": 63488 00:17:48.464 }, 00:17:48.464 { 00:17:48.464 "name": "BaseBdev4", 00:17:48.464 "uuid": "dcb7deea-ed3d-4d9b-b5cd-e95ae6f32f9b", 00:17:48.464 "is_configured": true, 00:17:48.464 "data_offset": 2048, 00:17:48.464 "data_size": 63488 00:17:48.464 } 00:17:48.464 ] 00:17:48.464 } 00:17:48.464 } 00:17:48.464 }' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:48.464 BaseBdev2 00:17:48.464 BaseBdev3 00:17:48.464 BaseBdev4' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.464 09:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.464 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.723 [2024-11-06 09:10:47.567602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.723 [2024-11-06 09:10:47.567636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.723 [2024-11-06 09:10:47.567720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.723 [2024-11-06 09:10:47.567789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.723 [2024-11-06 09:10:47.567801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69811 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 69811 ']' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 69811 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69811 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.723 killing process with pid 69811 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69811' 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 69811 00:17:48.723 [2024-11-06 09:10:47.615225] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.723 09:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 69811 00:17:49.290 [2024-11-06 09:10:48.020103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.243 09:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:50.243 00:17:50.243 real 0m11.465s 00:17:50.243 user 0m18.214s 00:17:50.243 sys 0m2.310s 00:17:50.243 09:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:50.243 09:10:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.243 ************************************ 00:17:50.243 END TEST raid_state_function_test_sb 00:17:50.243 ************************************ 00:17:50.243 09:10:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:50.243 09:10:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:50.243 09:10:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:50.243 09:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.243 ************************************ 00:17:50.243 START TEST raid_superblock_test 00:17:50.243 ************************************ 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70483 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70483 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70483 ']' 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.243 09:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.519 [2024-11-06 09:10:49.324460] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:50.519 [2024-11-06 09:10:49.324772] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70483 ] 00:17:50.519 [2024-11-06 09:10:49.507149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.777 [2024-11-06 09:10:49.629647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.035 [2024-11-06 09:10:49.830951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.035 [2024-11-06 09:10:49.831004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.293 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:51.294 
09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.294 malloc1 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.294 [2024-11-06 09:10:50.224311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.294 [2024-11-06 09:10:50.224376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.294 [2024-11-06 09:10:50.224400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:51.294 [2024-11-06 09:10:50.224411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.294 [2024-11-06 09:10:50.226768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.294 [2024-11-06 09:10:50.226809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.294 pt1 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.294 malloc2 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.294 [2024-11-06 09:10:50.281220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.294 [2024-11-06 09:10:50.281294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.294 [2024-11-06 09:10:50.281319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:51.294 [2024-11-06 09:10:50.281330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.294 [2024-11-06 09:10:50.283822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.294 [2024-11-06 09:10:50.283861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.294 
pt2 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.294 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 malloc3 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 [2024-11-06 09:10:50.352328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.553 [2024-11-06 09:10:50.352381] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.553 [2024-11-06 09:10:50.352404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.553 [2024-11-06 09:10:50.352415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.553 [2024-11-06 09:10:50.354785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.553 [2024-11-06 09:10:50.354824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.553 pt3 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 malloc4 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 [2024-11-06 09:10:50.401214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:51.553 [2024-11-06 09:10:50.401266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.553 [2024-11-06 09:10:50.401303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:51.553 [2024-11-06 09:10:50.401315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.553 [2024-11-06 09:10:50.403661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.553 [2024-11-06 09:10:50.403806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:51.553 pt4 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.553 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 [2024-11-06 09:10:50.413232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.553 [2024-11-06 
09:10:50.415269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.553 [2024-11-06 09:10:50.415353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.553 [2024-11-06 09:10:50.415417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:51.553 [2024-11-06 09:10:50.415591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:51.553 [2024-11-06 09:10:50.415603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:51.553 [2024-11-06 09:10:50.415865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:51.553 [2024-11-06 09:10:50.416053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.553 [2024-11-06 09:10:50.416068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.554 [2024-11-06 09:10:50.416225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.554 "name": "raid_bdev1", 00:17:51.554 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:51.554 "strip_size_kb": 64, 00:17:51.554 "state": "online", 00:17:51.554 "raid_level": "raid0", 00:17:51.554 "superblock": true, 00:17:51.554 "num_base_bdevs": 4, 00:17:51.554 "num_base_bdevs_discovered": 4, 00:17:51.554 "num_base_bdevs_operational": 4, 00:17:51.554 "base_bdevs_list": [ 00:17:51.554 { 00:17:51.554 "name": "pt1", 00:17:51.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.554 "is_configured": true, 00:17:51.554 "data_offset": 2048, 00:17:51.554 "data_size": 63488 00:17:51.554 }, 00:17:51.554 { 00:17:51.554 "name": "pt2", 00:17:51.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.554 "is_configured": true, 00:17:51.554 "data_offset": 2048, 00:17:51.554 "data_size": 63488 00:17:51.554 }, 00:17:51.554 { 00:17:51.554 "name": "pt3", 00:17:51.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.554 "is_configured": true, 00:17:51.554 "data_offset": 2048, 00:17:51.554 
"data_size": 63488 00:17:51.554 }, 00:17:51.554 { 00:17:51.554 "name": "pt4", 00:17:51.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:51.554 "is_configured": true, 00:17:51.554 "data_offset": 2048, 00:17:51.554 "data_size": 63488 00:17:51.554 } 00:17:51.554 ] 00:17:51.554 }' 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.554 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.121 [2024-11-06 09:10:50.892865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.121 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.121 "name": "raid_bdev1", 00:17:52.121 "aliases": [ 00:17:52.121 "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd" 
00:17:52.121 ], 00:17:52.121 "product_name": "Raid Volume", 00:17:52.121 "block_size": 512, 00:17:52.121 "num_blocks": 253952, 00:17:52.121 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:52.121 "assigned_rate_limits": { 00:17:52.121 "rw_ios_per_sec": 0, 00:17:52.121 "rw_mbytes_per_sec": 0, 00:17:52.121 "r_mbytes_per_sec": 0, 00:17:52.121 "w_mbytes_per_sec": 0 00:17:52.121 }, 00:17:52.121 "claimed": false, 00:17:52.121 "zoned": false, 00:17:52.121 "supported_io_types": { 00:17:52.121 "read": true, 00:17:52.121 "write": true, 00:17:52.121 "unmap": true, 00:17:52.121 "flush": true, 00:17:52.121 "reset": true, 00:17:52.121 "nvme_admin": false, 00:17:52.121 "nvme_io": false, 00:17:52.121 "nvme_io_md": false, 00:17:52.121 "write_zeroes": true, 00:17:52.121 "zcopy": false, 00:17:52.121 "get_zone_info": false, 00:17:52.121 "zone_management": false, 00:17:52.121 "zone_append": false, 00:17:52.121 "compare": false, 00:17:52.121 "compare_and_write": false, 00:17:52.121 "abort": false, 00:17:52.121 "seek_hole": false, 00:17:52.121 "seek_data": false, 00:17:52.121 "copy": false, 00:17:52.121 "nvme_iov_md": false 00:17:52.121 }, 00:17:52.121 "memory_domains": [ 00:17:52.121 { 00:17:52.121 "dma_device_id": "system", 00:17:52.121 "dma_device_type": 1 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.121 "dma_device_type": 2 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "system", 00:17:52.121 "dma_device_type": 1 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.121 "dma_device_type": 2 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "system", 00:17:52.121 "dma_device_type": 1 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.121 "dma_device_type": 2 00:17:52.121 }, 00:17:52.121 { 00:17:52.121 "dma_device_id": "system", 00:17:52.121 "dma_device_type": 1 00:17:52.122 }, 00:17:52.122 { 00:17:52.122 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:52.122 "dma_device_type": 2 00:17:52.122 } 00:17:52.122 ], 00:17:52.122 "driver_specific": { 00:17:52.122 "raid": { 00:17:52.122 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:52.122 "strip_size_kb": 64, 00:17:52.122 "state": "online", 00:17:52.122 "raid_level": "raid0", 00:17:52.122 "superblock": true, 00:17:52.122 "num_base_bdevs": 4, 00:17:52.122 "num_base_bdevs_discovered": 4, 00:17:52.122 "num_base_bdevs_operational": 4, 00:17:52.122 "base_bdevs_list": [ 00:17:52.122 { 00:17:52.122 "name": "pt1", 00:17:52.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.122 "is_configured": true, 00:17:52.122 "data_offset": 2048, 00:17:52.122 "data_size": 63488 00:17:52.122 }, 00:17:52.122 { 00:17:52.122 "name": "pt2", 00:17:52.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.122 "is_configured": true, 00:17:52.122 "data_offset": 2048, 00:17:52.122 "data_size": 63488 00:17:52.122 }, 00:17:52.122 { 00:17:52.122 "name": "pt3", 00:17:52.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.122 "is_configured": true, 00:17:52.122 "data_offset": 2048, 00:17:52.122 "data_size": 63488 00:17:52.122 }, 00:17:52.122 { 00:17:52.122 "name": "pt4", 00:17:52.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.122 "is_configured": true, 00:17:52.122 "data_offset": 2048, 00:17:52.122 "data_size": 63488 00:17:52.122 } 00:17:52.122 ] 00:17:52.122 } 00:17:52.122 } 00:17:52.122 }' 00:17:52.122 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.122 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.122 pt2 00:17:52.122 pt3 00:17:52.122 pt4' 00:17:52.122 09:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.122 09:10:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.122 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:52.381 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:52.381 [2024-11-06 09:10:51.224441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd ']' 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 [2024-11-06 09:10:51.276018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.382 [2024-11-06 09:10:51.276050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.382 [2024-11-06 09:10:51.276142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.382 [2024-11-06 09:10:51.276209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.382 [2024-11-06 09:10:51.276227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:52.382 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.641 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.641 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-11-06 09:10:51.447766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.642 [2024-11-06 09:10:51.449907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.642 [2024-11-06 09:10:51.449958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:52.642 [2024-11-06 09:10:51.449992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:52.642 [2024-11-06 09:10:51.450045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:52.642 [2024-11-06 09:10:51.450096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:52.642 [2024-11-06 09:10:51.450117] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:52.642 [2024-11-06 09:10:51.450139] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:52.642 [2024-11-06 09:10:51.450156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.642 [2024-11-06 09:10:51.450172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:17:52.642 request: 00:17:52.642 { 00:17:52.642 "name": "raid_bdev1", 00:17:52.642 "raid_level": "raid0", 00:17:52.642 "base_bdevs": [ 00:17:52.642 "malloc1", 00:17:52.642 "malloc2", 00:17:52.642 "malloc3", 00:17:52.642 "malloc4" 00:17:52.642 ], 00:17:52.642 "strip_size_kb": 64, 00:17:52.642 "superblock": false, 00:17:52.642 "method": "bdev_raid_create", 00:17:52.642 "req_id": 1 00:17:52.642 } 00:17:52.642 Got JSON-RPC error response 00:17:52.642 response: 00:17:52.642 { 00:17:52.642 "code": -17, 00:17:52.642 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.642 } 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-11-06 09:10:51.519827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.642 [2024-11-06 09:10:51.519896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.642 [2024-11-06 09:10:51.519916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:52.642 [2024-11-06 09:10:51.519931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.642 [2024-11-06 09:10:51.522421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.642 [2024-11-06 09:10:51.522578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.642 [2024-11-06 09:10:51.522683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.642 [2024-11-06 09:10:51.522762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.642 pt1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.642 "name": "raid_bdev1", 00:17:52.642 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:52.642 "strip_size_kb": 64, 00:17:52.642 "state": "configuring", 00:17:52.642 "raid_level": "raid0", 00:17:52.642 "superblock": true, 00:17:52.642 "num_base_bdevs": 4, 00:17:52.642 "num_base_bdevs_discovered": 1, 00:17:52.642 "num_base_bdevs_operational": 4, 00:17:52.642 "base_bdevs_list": [ 00:17:52.642 { 00:17:52.642 "name": "pt1", 00:17:52.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.642 "is_configured": true, 00:17:52.642 "data_offset": 2048, 00:17:52.642 "data_size": 63488 00:17:52.642 }, 00:17:52.642 { 00:17:52.642 "name": null, 00:17:52.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.642 "is_configured": false, 00:17:52.642 "data_offset": 2048, 00:17:52.642 "data_size": 63488 00:17:52.642 }, 00:17:52.642 { 00:17:52.642 "name": null, 00:17:52.642 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:17:52.642 "is_configured": false, 00:17:52.642 "data_offset": 2048, 00:17:52.642 "data_size": 63488 00:17:52.642 }, 00:17:52.642 { 00:17:52.642 "name": null, 00:17:52.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.642 "is_configured": false, 00:17:52.642 "data_offset": 2048, 00:17:52.642 "data_size": 63488 00:17:52.642 } 00:17:52.642 ] 00:17:52.642 }' 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.642 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 [2024-11-06 09:10:51.955297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.208 [2024-11-06 09:10:51.955497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.208 [2024-11-06 09:10:51.955527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:53.208 [2024-11-06 09:10:51.955542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.208 [2024-11-06 09:10:51.956002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.208 [2024-11-06 09:10:51.956026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.208 [2024-11-06 09:10:51.956113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.208 [2024-11-06 09:10:51.956137] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.208 pt2 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 [2024-11-06 09:10:51.963269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:53.208 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.209 09:10:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.209 09:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.209 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.209 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.209 "name": "raid_bdev1", 00:17:53.209 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:53.209 "strip_size_kb": 64, 00:17:53.209 "state": "configuring", 00:17:53.209 "raid_level": "raid0", 00:17:53.209 "superblock": true, 00:17:53.209 "num_base_bdevs": 4, 00:17:53.209 "num_base_bdevs_discovered": 1, 00:17:53.209 "num_base_bdevs_operational": 4, 00:17:53.209 "base_bdevs_list": [ 00:17:53.209 { 00:17:53.209 "name": "pt1", 00:17:53.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.209 "is_configured": true, 00:17:53.209 "data_offset": 2048, 00:17:53.209 "data_size": 63488 00:17:53.209 }, 00:17:53.209 { 00:17:53.209 "name": null, 00:17:53.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.209 "is_configured": false, 00:17:53.209 "data_offset": 0, 00:17:53.209 "data_size": 63488 00:17:53.209 }, 00:17:53.209 { 00:17:53.209 "name": null, 00:17:53.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.209 "is_configured": false, 00:17:53.209 "data_offset": 2048, 00:17:53.209 "data_size": 63488 00:17:53.209 }, 00:17:53.209 { 00:17:53.209 "name": null, 00:17:53.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:53.209 "is_configured": false, 00:17:53.209 "data_offset": 2048, 00:17:53.209 "data_size": 63488 00:17:53.209 } 00:17:53.209 ] 00:17:53.209 }' 00:17:53.209 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.209 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.469 [2024-11-06 09:10:52.378702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.469 [2024-11-06 09:10:52.378776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.469 [2024-11-06 09:10:52.378800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:53.469 [2024-11-06 09:10:52.378812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.469 [2024-11-06 09:10:52.379290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.469 [2024-11-06 09:10:52.379313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.469 [2024-11-06 09:10:52.379408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.469 [2024-11-06 09:10:52.379431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.469 pt2 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.469 [2024-11-06 09:10:52.390688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.469 [2024-11-06 09:10:52.390763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.469 [2024-11-06 09:10:52.390793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:53.469 [2024-11-06 09:10:52.390807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.469 [2024-11-06 09:10:52.391290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.469 [2024-11-06 09:10:52.391312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.469 [2024-11-06 09:10:52.391401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:53.469 [2024-11-06 09:10:52.391425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.469 pt3 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.469 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.469 [2024-11-06 09:10:52.402638] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:17:53.469 [2024-11-06 09:10:52.402711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.469 [2024-11-06 09:10:52.402738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:53.470 [2024-11-06 09:10:52.402750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.470 [2024-11-06 09:10:52.403228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.470 [2024-11-06 09:10:52.403248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:53.470 [2024-11-06 09:10:52.403358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:53.470 [2024-11-06 09:10:52.403384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:53.470 [2024-11-06 09:10:52.403535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:53.470 [2024-11-06 09:10:52.403544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:53.470 [2024-11-06 09:10:52.403797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.470 [2024-11-06 09:10:52.403943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:53.470 [2024-11-06 09:10:52.403962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:53.470 [2024-11-06 09:10:52.404094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.470 pt4 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.470 
09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.470 "name": "raid_bdev1", 00:17:53.470 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:53.470 "strip_size_kb": 64, 00:17:53.470 "state": "online", 00:17:53.470 "raid_level": "raid0", 00:17:53.470 "superblock": true, 00:17:53.470 
"num_base_bdevs": 4, 00:17:53.470 "num_base_bdevs_discovered": 4, 00:17:53.470 "num_base_bdevs_operational": 4, 00:17:53.470 "base_bdevs_list": [ 00:17:53.470 { 00:17:53.470 "name": "pt1", 00:17:53.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.470 "is_configured": true, 00:17:53.470 "data_offset": 2048, 00:17:53.470 "data_size": 63488 00:17:53.470 }, 00:17:53.470 { 00:17:53.470 "name": "pt2", 00:17:53.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.470 "is_configured": true, 00:17:53.470 "data_offset": 2048, 00:17:53.470 "data_size": 63488 00:17:53.470 }, 00:17:53.470 { 00:17:53.470 "name": "pt3", 00:17:53.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.470 "is_configured": true, 00:17:53.470 "data_offset": 2048, 00:17:53.470 "data_size": 63488 00:17:53.470 }, 00:17:53.470 { 00:17:53.470 "name": "pt4", 00:17:53.470 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:53.470 "is_configured": true, 00:17:53.470 "data_offset": 2048, 00:17:53.470 "data_size": 63488 00:17:53.470 } 00:17:53.470 ] 00:17:53.470 }' 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.470 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.038 [2024-11-06 09:10:52.854397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.038 "name": "raid_bdev1", 00:17:54.038 "aliases": [ 00:17:54.038 "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd" 00:17:54.038 ], 00:17:54.038 "product_name": "Raid Volume", 00:17:54.038 "block_size": 512, 00:17:54.038 "num_blocks": 253952, 00:17:54.038 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:54.038 "assigned_rate_limits": { 00:17:54.038 "rw_ios_per_sec": 0, 00:17:54.038 "rw_mbytes_per_sec": 0, 00:17:54.038 "r_mbytes_per_sec": 0, 00:17:54.038 "w_mbytes_per_sec": 0 00:17:54.038 }, 00:17:54.038 "claimed": false, 00:17:54.038 "zoned": false, 00:17:54.038 "supported_io_types": { 00:17:54.038 "read": true, 00:17:54.038 "write": true, 00:17:54.038 "unmap": true, 00:17:54.038 "flush": true, 00:17:54.038 "reset": true, 00:17:54.038 "nvme_admin": false, 00:17:54.038 "nvme_io": false, 00:17:54.038 "nvme_io_md": false, 00:17:54.038 "write_zeroes": true, 00:17:54.038 "zcopy": false, 00:17:54.038 "get_zone_info": false, 00:17:54.038 "zone_management": false, 00:17:54.038 "zone_append": false, 00:17:54.038 "compare": false, 00:17:54.038 "compare_and_write": false, 00:17:54.038 "abort": false, 00:17:54.038 "seek_hole": false, 00:17:54.038 "seek_data": false, 00:17:54.038 "copy": false, 00:17:54.038 "nvme_iov_md": false 00:17:54.038 }, 00:17:54.038 "memory_domains": [ 00:17:54.038 { 00:17:54.038 "dma_device_id": "system", 
00:17:54.038 "dma_device_type": 1 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.038 "dma_device_type": 2 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "system", 00:17:54.038 "dma_device_type": 1 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.038 "dma_device_type": 2 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "system", 00:17:54.038 "dma_device_type": 1 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.038 "dma_device_type": 2 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "system", 00:17:54.038 "dma_device_type": 1 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.038 "dma_device_type": 2 00:17:54.038 } 00:17:54.038 ], 00:17:54.038 "driver_specific": { 00:17:54.038 "raid": { 00:17:54.038 "uuid": "39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd", 00:17:54.038 "strip_size_kb": 64, 00:17:54.038 "state": "online", 00:17:54.038 "raid_level": "raid0", 00:17:54.038 "superblock": true, 00:17:54.038 "num_base_bdevs": 4, 00:17:54.038 "num_base_bdevs_discovered": 4, 00:17:54.038 "num_base_bdevs_operational": 4, 00:17:54.038 "base_bdevs_list": [ 00:17:54.038 { 00:17:54.038 "name": "pt1", 00:17:54.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.038 "is_configured": true, 00:17:54.038 "data_offset": 2048, 00:17:54.038 "data_size": 63488 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "name": "pt2", 00:17:54.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.038 "is_configured": true, 00:17:54.038 "data_offset": 2048, 00:17:54.038 "data_size": 63488 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "name": "pt3", 00:17:54.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.038 "is_configured": true, 00:17:54.038 "data_offset": 2048, 00:17:54.038 "data_size": 63488 00:17:54.038 }, 00:17:54.038 { 00:17:54.038 "name": "pt4", 00:17:54.038 
"uuid": "00000000-0000-0000-0000-000000000004", 00:17:54.038 "is_configured": true, 00:17:54.038 "data_offset": 2048, 00:17:54.038 "data_size": 63488 00:17:54.038 } 00:17:54.038 ] 00:17:54.038 } 00:17:54.038 } 00:17:54.038 }' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:54.038 pt2 00:17:54.038 pt3 00:17:54.038 pt4' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.038 09:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.038 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:54.297 [2024-11-06 09:10:53.154135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd '!=' 39f553b1-d0f6-4d10-b0aa-1ce4b922dcbd ']' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70483 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70483 ']' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70483 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:54.297 09:10:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70483 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:54.297 killing process with pid 70483 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70483' 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70483 00:17:54.297 [2024-11-06 09:10:53.237622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.297 [2024-11-06 09:10:53.237712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.297 09:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70483 00:17:54.297 [2024-11-06 09:10:53.237812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.297 [2024-11-06 09:10:53.237824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:54.862 [2024-11-06 09:10:53.640759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.795 09:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:55.795 00:17:55.795 real 0m5.534s 00:17:55.795 user 0m7.925s 00:17:55.795 sys 0m1.111s 00:17:55.795 09:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.795 ************************************ 00:17:55.795 END TEST raid_superblock_test 00:17:55.795 ************************************ 00:17:55.795 09:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.795 
09:10:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:17:55.795 09:10:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:55.795 09:10:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:55.795 09:10:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.795 ************************************ 00:17:55.795 START TEST raid_read_error_test 00:17:55.795 ************************************ 00:17:55.795 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:17:55.795 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:55.796 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:55.796 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UAVFzNLMKI 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70749 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:56.054 09:10:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70749 00:17:56.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 70749 ']' 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.054 09:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.054 [2024-11-06 09:10:54.949998] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:17:56.054 [2024-11-06 09:10:54.950333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70749 ] 00:17:56.313 [2024-11-06 09:10:55.128867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.313 [2024-11-06 09:10:55.243958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.572 [2024-11-06 09:10:55.464078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.572 [2024-11-06 09:10:55.464140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.831 BaseBdev1_malloc 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.831 true 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.831 [2024-11-06 09:10:55.846953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:56.831 [2024-11-06 09:10:55.847209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.831 [2024-11-06 09:10:55.847247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:56.831 [2024-11-06 09:10:55.847263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.831 [2024-11-06 09:10:55.849862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.831 [2024-11-06 09:10:55.849912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.831 BaseBdev1 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.831 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 BaseBdev2_malloc 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 true 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 [2024-11-06 09:10:55.915381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:57.090 [2024-11-06 09:10:55.915439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.090 [2024-11-06 09:10:55.915459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:57.090 [2024-11-06 09:10:55.915473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.090 [2024-11-06 09:10:55.917843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.090 [2024-11-06 09:10:55.918039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:57.090 BaseBdev2 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 BaseBdev3_malloc 00:17:57.090 09:10:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 true 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.090 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 [2024-11-06 09:10:55.992580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:57.091 [2024-11-06 09:10:55.992635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.091 [2024-11-06 09:10:55.992656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:57.091 [2024-11-06 09:10:55.992670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.091 [2024-11-06 09:10:55.995041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.091 [2024-11-06 09:10:55.995083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:57.091 BaseBdev3 00:17:57.091 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:57.091 09:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:57.091 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 BaseBdev4_malloc 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 true 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 [2024-11-06 09:10:56.057375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:57.091 [2024-11-06 09:10:56.057559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.091 [2024-11-06 09:10:56.057617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:57.091 [2024-11-06 09:10:56.057704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.091 [2024-11-06 09:10:56.060334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.091 [2024-11-06 09:10:56.060474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:57.091 BaseBdev4 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 [2024-11-06 09:10:56.069421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.091 [2024-11-06 09:10:56.071482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.091 [2024-11-06 09:10:56.071553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.091 [2024-11-06 09:10:56.071618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.091 [2024-11-06 09:10:56.071834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:57.091 [2024-11-06 09:10:56.071852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:57.091 [2024-11-06 09:10:56.072098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:57.091 [2024-11-06 09:10:56.072255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:57.091 [2024-11-06 09:10:56.072268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:57.091 [2024-11-06 09:10:56.072426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:57.091 09:10:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.350 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.350 "name": "raid_bdev1", 00:17:57.350 "uuid": "e380da37-c6e7-43d3-874f-9e061456353f", 00:17:57.350 "strip_size_kb": 64, 00:17:57.350 "state": "online", 00:17:57.350 "raid_level": "raid0", 00:17:57.350 "superblock": true, 00:17:57.350 "num_base_bdevs": 4, 00:17:57.350 "num_base_bdevs_discovered": 4, 00:17:57.350 "num_base_bdevs_operational": 4, 00:17:57.350 "base_bdevs_list": [ 00:17:57.350 
{ 00:17:57.350 "name": "BaseBdev1", 00:17:57.350 "uuid": "dcfc22bc-2c0b-5ee2-ad2e-b77e2b567974", 00:17:57.350 "is_configured": true, 00:17:57.350 "data_offset": 2048, 00:17:57.350 "data_size": 63488 00:17:57.350 }, 00:17:57.350 { 00:17:57.350 "name": "BaseBdev2", 00:17:57.350 "uuid": "7381487b-c1ce-56e6-a583-9f4f5fdf3234", 00:17:57.350 "is_configured": true, 00:17:57.350 "data_offset": 2048, 00:17:57.350 "data_size": 63488 00:17:57.350 }, 00:17:57.350 { 00:17:57.350 "name": "BaseBdev3", 00:17:57.350 "uuid": "ca58b36d-4ff1-5b17-a1d7-f08f792f879f", 00:17:57.350 "is_configured": true, 00:17:57.350 "data_offset": 2048, 00:17:57.350 "data_size": 63488 00:17:57.350 }, 00:17:57.350 { 00:17:57.350 "name": "BaseBdev4", 00:17:57.350 "uuid": "18a6ad12-da8c-5947-acc6-aaa2277d2eda", 00:17:57.350 "is_configured": true, 00:17:57.350 "data_offset": 2048, 00:17:57.350 "data_size": 63488 00:17:57.350 } 00:17:57.350 ] 00:17:57.350 }' 00:17:57.350 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.350 09:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.608 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:57.608 09:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:57.608 [2024-11-06 09:10:56.601972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.590 09:10:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.590 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.591 09:10:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.591 "name": "raid_bdev1", 00:17:58.591 "uuid": "e380da37-c6e7-43d3-874f-9e061456353f", 00:17:58.591 "strip_size_kb": 64, 00:17:58.591 "state": "online", 00:17:58.591 "raid_level": "raid0", 00:17:58.591 "superblock": true, 00:17:58.591 "num_base_bdevs": 4, 00:17:58.591 "num_base_bdevs_discovered": 4, 00:17:58.591 "num_base_bdevs_operational": 4, 00:17:58.591 "base_bdevs_list": [ 00:17:58.591 { 00:17:58.591 "name": "BaseBdev1", 00:17:58.591 "uuid": "dcfc22bc-2c0b-5ee2-ad2e-b77e2b567974", 00:17:58.591 "is_configured": true, 00:17:58.591 "data_offset": 2048, 00:17:58.591 "data_size": 63488 00:17:58.591 }, 00:17:58.591 { 00:17:58.591 "name": "BaseBdev2", 00:17:58.591 "uuid": "7381487b-c1ce-56e6-a583-9f4f5fdf3234", 00:17:58.591 "is_configured": true, 00:17:58.591 "data_offset": 2048, 00:17:58.591 "data_size": 63488 00:17:58.591 }, 00:17:58.591 { 00:17:58.591 "name": "BaseBdev3", 00:17:58.591 "uuid": "ca58b36d-4ff1-5b17-a1d7-f08f792f879f", 00:17:58.591 "is_configured": true, 00:17:58.591 "data_offset": 2048, 00:17:58.591 "data_size": 63488 00:17:58.591 }, 00:17:58.591 { 00:17:58.591 "name": "BaseBdev4", 00:17:58.591 "uuid": "18a6ad12-da8c-5947-acc6-aaa2277d2eda", 00:17:58.591 "is_configured": true, 00:17:58.591 "data_offset": 2048, 00:17:58.591 "data_size": 63488 00:17:58.591 } 00:17:58.591 ] 00:17:58.591 }' 00:17:58.591 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.591 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.184 [2024-11-06 09:10:57.948374] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.184 [2024-11-06 09:10:57.948553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.184 [2024-11-06 09:10:57.951323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.184 [2024-11-06 09:10:57.951379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.184 [2024-11-06 09:10:57.951425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.184 [2024-11-06 09:10:57.951440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:59.184 { 00:17:59.184 "results": [ 00:17:59.184 { 00:17:59.184 "job": "raid_bdev1", 00:17:59.184 "core_mask": "0x1", 00:17:59.184 "workload": "randrw", 00:17:59.184 "percentage": 50, 00:17:59.184 "status": "finished", 00:17:59.184 "queue_depth": 1, 00:17:59.184 "io_size": 131072, 00:17:59.184 "runtime": 1.34669, 00:17:59.184 "iops": 16353.429519785548, 00:17:59.184 "mibps": 2044.1786899731935, 00:17:59.184 "io_failed": 1, 00:17:59.184 "io_timeout": 0, 00:17:59.184 "avg_latency_us": 84.76036043921418, 00:17:59.184 "min_latency_us": 26.730923694779115, 00:17:59.184 "max_latency_us": 1401.5228915662651 00:17:59.184 } 00:17:59.184 ], 00:17:59.184 "core_count": 1 00:17:59.184 } 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70749 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 70749 ']' 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 70749 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70749 00:17:59.184 killing process with pid 70749 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70749' 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 70749 00:17:59.184 [2024-11-06 09:10:57.986896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.184 09:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 70749 00:17:59.443 [2024-11-06 09:10:58.320536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UAVFzNLMKI 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:00.820 ************************************ 00:18:00.820 END TEST raid_read_error_test 00:18:00.820 ************************************ 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:18:00.820 00:18:00.820 real 0m4.677s 
00:18:00.820 user 0m5.443s 00:18:00.820 sys 0m0.640s 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:00.820 09:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 09:10:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:18:00.820 09:10:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:00.820 09:10:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:00.820 09:10:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 ************************************ 00:18:00.820 START TEST raid_write_error_test 00:18:00.820 ************************************ 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.USUZwm5Fi8 00:18:00.820 09:10:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70889 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70889 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 70889 ']' 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.820 09:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 [2024-11-06 09:10:59.701477] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:00.820 [2024-11-06 09:10:59.701786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:18:01.079 [2024-11-06 09:10:59.875255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.079 [2024-11-06 09:10:59.992154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.338 [2024-11-06 09:11:00.208593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.338 [2024-11-06 09:11:00.208658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.597 BaseBdev1_malloc 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.597 true 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.597 [2024-11-06 09:11:00.589155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:01.597 [2024-11-06 09:11:00.589214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.597 [2024-11-06 09:11:00.589236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:01.597 [2024-11-06 09:11:00.589250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.597 [2024-11-06 09:11:00.591638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.597 [2024-11-06 09:11:00.591680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:01.597 BaseBdev1 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.597 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 BaseBdev2_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:01.856 09:11:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 true 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 [2024-11-06 09:11:00.654580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:01.856 [2024-11-06 09:11:00.654636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.856 [2024-11-06 09:11:00.654655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:01.856 [2024-11-06 09:11:00.654668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.856 [2024-11-06 09:11:00.656975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.856 [2024-11-06 09:11:00.657135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:01.856 BaseBdev2 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:01.856 BaseBdev3_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 true 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 [2024-11-06 09:11:00.741241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:01.856 [2024-11-06 09:11:00.741419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.856 [2024-11-06 09:11:00.741445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:01.856 [2024-11-06 09:11:00.741460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.856 [2024-11-06 09:11:00.743792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.856 [2024-11-06 09:11:00.743835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:01.856 BaseBdev3 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 BaseBdev4_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 true 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 [2024-11-06 09:11:00.809821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:01.856 [2024-11-06 09:11:00.809878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.856 [2024-11-06 09:11:00.809900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:01.856 [2024-11-06 09:11:00.809914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.856 [2024-11-06 09:11:00.812270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.856 [2024-11-06 09:11:00.812327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:01.856 BaseBdev4 
00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 [2024-11-06 09:11:00.821859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.856 [2024-11-06 09:11:00.824019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.856 [2024-11-06 09:11:00.824090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.856 [2024-11-06 09:11:00.824156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.856 [2024-11-06 09:11:00.824372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:01.856 [2024-11-06 09:11:00.824393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:01.856 [2024-11-06 09:11:00.824644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:01.856 [2024-11-06 09:11:00.824789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:01.856 [2024-11-06 09:11:00.824801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:01.856 [2024-11-06 09:11:00.824944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.856 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.856 "name": "raid_bdev1", 00:18:01.856 "uuid": "c7954f46-c47d-4cad-9833-1948ee69df39", 00:18:01.856 "strip_size_kb": 64, 00:18:01.856 "state": "online", 00:18:01.856 "raid_level": "raid0", 00:18:01.856 "superblock": true, 00:18:01.856 "num_base_bdevs": 4, 00:18:01.856 "num_base_bdevs_discovered": 4, 00:18:01.856 
"num_base_bdevs_operational": 4, 00:18:01.856 "base_bdevs_list": [ 00:18:01.856 { 00:18:01.856 "name": "BaseBdev1", 00:18:01.856 "uuid": "d487b70c-c4a9-5b39-afa1-bc11a21aaf4f", 00:18:01.856 "is_configured": true, 00:18:01.856 "data_offset": 2048, 00:18:01.856 "data_size": 63488 00:18:01.856 }, 00:18:01.856 { 00:18:01.856 "name": "BaseBdev2", 00:18:01.856 "uuid": "4c73526a-4670-5025-bade-c4d71aa48a3e", 00:18:01.856 "is_configured": true, 00:18:01.856 "data_offset": 2048, 00:18:01.856 "data_size": 63488 00:18:01.856 }, 00:18:01.856 { 00:18:01.856 "name": "BaseBdev3", 00:18:01.856 "uuid": "b48f3f70-9a9f-57bb-ab66-86cf38f90556", 00:18:01.856 "is_configured": true, 00:18:01.856 "data_offset": 2048, 00:18:01.856 "data_size": 63488 00:18:01.856 }, 00:18:01.857 { 00:18:01.857 "name": "BaseBdev4", 00:18:01.857 "uuid": "87ee036e-4be7-5655-a72d-06adb50e8e8b", 00:18:01.857 "is_configured": true, 00:18:01.857 "data_offset": 2048, 00:18:01.857 "data_size": 63488 00:18:01.857 } 00:18:01.857 ] 00:18:01.857 }' 00:18:01.857 09:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.857 09:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.423 09:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:02.423 09:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:02.423 [2024-11-06 09:11:01.302979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.358 "name": "raid_bdev1", 00:18:03.358 "uuid": "c7954f46-c47d-4cad-9833-1948ee69df39", 00:18:03.358 "strip_size_kb": 64, 00:18:03.358 "state": "online", 00:18:03.358 "raid_level": "raid0", 00:18:03.358 "superblock": true, 00:18:03.358 "num_base_bdevs": 4, 00:18:03.358 "num_base_bdevs_discovered": 4, 00:18:03.358 "num_base_bdevs_operational": 4, 00:18:03.358 "base_bdevs_list": [ 00:18:03.358 { 00:18:03.358 "name": "BaseBdev1", 00:18:03.358 "uuid": "d487b70c-c4a9-5b39-afa1-bc11a21aaf4f", 00:18:03.358 "is_configured": true, 00:18:03.358 "data_offset": 2048, 00:18:03.358 "data_size": 63488 00:18:03.358 }, 00:18:03.358 { 00:18:03.358 "name": "BaseBdev2", 00:18:03.358 "uuid": "4c73526a-4670-5025-bade-c4d71aa48a3e", 00:18:03.358 "is_configured": true, 00:18:03.358 "data_offset": 2048, 00:18:03.358 "data_size": 63488 00:18:03.358 }, 00:18:03.358 { 00:18:03.358 "name": "BaseBdev3", 00:18:03.358 "uuid": "b48f3f70-9a9f-57bb-ab66-86cf38f90556", 00:18:03.358 "is_configured": true, 00:18:03.358 "data_offset": 2048, 00:18:03.358 "data_size": 63488 00:18:03.358 }, 00:18:03.358 { 00:18:03.358 "name": "BaseBdev4", 00:18:03.358 "uuid": "87ee036e-4be7-5655-a72d-06adb50e8e8b", 00:18:03.358 "is_configured": true, 00:18:03.358 "data_offset": 2048, 00:18:03.358 "data_size": 63488 00:18:03.358 } 00:18:03.358 ] 00:18:03.358 }' 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.358 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:03.925 [2024-11-06 09:11:02.669847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.925 [2024-11-06 09:11:02.670008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.925 [2024-11-06 09:11:02.672730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.925 [2024-11-06 09:11:02.672900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.925 [2024-11-06 09:11:02.672956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.925 [2024-11-06 09:11:02.672971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:03.925 { 00:18:03.925 "results": [ 00:18:03.925 { 00:18:03.925 "job": "raid_bdev1", 00:18:03.925 "core_mask": "0x1", 00:18:03.925 "workload": "randrw", 00:18:03.925 "percentage": 50, 00:18:03.925 "status": "finished", 00:18:03.925 "queue_depth": 1, 00:18:03.925 "io_size": 131072, 00:18:03.925 "runtime": 1.366825, 00:18:03.925 "iops": 16392.73498801968, 00:18:03.925 "mibps": 2049.09187350246, 00:18:03.925 "io_failed": 1, 00:18:03.925 "io_timeout": 0, 00:18:03.925 "avg_latency_us": 84.56974866037095, 00:18:03.925 "min_latency_us": 26.936546184738955, 00:18:03.925 "max_latency_us": 1421.2626506024096 00:18:03.925 } 00:18:03.925 ], 00:18:03.925 "core_count": 1 00:18:03.925 } 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70889 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 70889 ']' 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 70889 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70889 00:18:03.925 killing process with pid 70889 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70889' 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 70889 00:18:03.925 [2024-11-06 09:11:02.720047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.925 09:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 70889 00:18:04.183 [2024-11-06 09:11:03.052568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.USUZwm5Fi8 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:18:05.559 00:18:05.559 real 0m4.659s 00:18:05.559 user 0m5.422s 00:18:05.559 sys 0m0.608s 00:18:05.559 09:11:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:05.559 09:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 ************************************ 00:18:05.559 END TEST raid_write_error_test 00:18:05.559 ************************************ 00:18:05.559 09:11:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:05.559 09:11:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:05.559 09:11:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:05.559 09:11:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.559 09:11:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 ************************************ 00:18:05.559 START TEST raid_state_function_test 00:18:05.559 ************************************ 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:05.559 Process raid pid: 71039 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71039 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71039' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71039 00:18:05.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71039 ']' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.559 09:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 [2024-11-06 09:11:04.422984] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:05.559 [2024-11-06 09:11:04.423292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.818 [2024-11-06 09:11:04.605221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.818 [2024-11-06 09:11:04.722904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.076 [2024-11-06 09:11:04.919147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.076 [2024-11-06 09:11:04.919308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.335 [2024-11-06 09:11:05.263356] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.335 [2024-11-06 09:11:05.263413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.335 [2024-11-06 09:11:05.263425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.335 [2024-11-06 09:11:05.263438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.335 [2024-11-06 09:11:05.263452] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:06.335 [2024-11-06 09:11:05.263464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.335 [2024-11-06 09:11:05.263472] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.335 [2024-11-06 09:11:05.263484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.335 "name": "Existed_Raid", 00:18:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.335 "strip_size_kb": 64, 00:18:06.335 "state": "configuring", 00:18:06.335 "raid_level": "concat", 00:18:06.335 "superblock": false, 00:18:06.335 "num_base_bdevs": 4, 00:18:06.335 "num_base_bdevs_discovered": 0, 00:18:06.335 "num_base_bdevs_operational": 4, 00:18:06.335 "base_bdevs_list": [ 00:18:06.335 { 00:18:06.335 "name": "BaseBdev1", 00:18:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.335 "is_configured": false, 00:18:06.335 "data_offset": 0, 00:18:06.335 "data_size": 0 00:18:06.335 }, 00:18:06.335 { 00:18:06.335 "name": "BaseBdev2", 00:18:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.335 "is_configured": false, 00:18:06.335 "data_offset": 0, 00:18:06.335 "data_size": 0 00:18:06.335 }, 00:18:06.335 { 00:18:06.335 "name": "BaseBdev3", 00:18:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.335 "is_configured": false, 00:18:06.335 "data_offset": 0, 00:18:06.335 "data_size": 0 00:18:06.335 }, 00:18:06.335 { 00:18:06.335 "name": "BaseBdev4", 00:18:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.335 "is_configured": false, 00:18:06.335 "data_offset": 0, 00:18:06.335 "data_size": 0 00:18:06.335 } 00:18:06.335 ] 00:18:06.335 }' 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.335 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 [2024-11-06 09:11:05.714655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.915 [2024-11-06 09:11:05.714715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 [2024-11-06 09:11:05.726630] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.915 [2024-11-06 09:11:05.726787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.915 [2024-11-06 09:11:05.726872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.915 [2024-11-06 09:11:05.726917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.915 [2024-11-06 09:11:05.726987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:06.915 [2024-11-06 09:11:05.727028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.915 [2024-11-06 09:11:05.727056] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.915 [2024-11-06 09:11:05.727129] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 [2024-11-06 09:11:05.772371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.915 BaseBdev1 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.915 [ 00:18:06.915 { 00:18:06.915 "name": "BaseBdev1", 00:18:06.915 "aliases": [ 00:18:06.915 "986c959a-0319-4300-aeb1-b08a93462884" 00:18:06.915 ], 00:18:06.915 "product_name": "Malloc disk", 00:18:06.915 "block_size": 512, 00:18:06.915 "num_blocks": 65536, 00:18:06.915 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:06.915 "assigned_rate_limits": { 00:18:06.915 "rw_ios_per_sec": 0, 00:18:06.915 "rw_mbytes_per_sec": 0, 00:18:06.915 "r_mbytes_per_sec": 0, 00:18:06.915 "w_mbytes_per_sec": 0 00:18:06.915 }, 00:18:06.915 "claimed": true, 00:18:06.915 "claim_type": "exclusive_write", 00:18:06.915 "zoned": false, 00:18:06.915 "supported_io_types": { 00:18:06.915 "read": true, 00:18:06.915 "write": true, 00:18:06.915 "unmap": true, 00:18:06.915 "flush": true, 00:18:06.915 "reset": true, 00:18:06.915 "nvme_admin": false, 00:18:06.915 "nvme_io": false, 00:18:06.915 "nvme_io_md": false, 00:18:06.915 "write_zeroes": true, 00:18:06.915 "zcopy": true, 00:18:06.915 "get_zone_info": false, 00:18:06.915 "zone_management": false, 00:18:06.915 "zone_append": false, 00:18:06.915 "compare": false, 00:18:06.915 "compare_and_write": false, 00:18:06.915 "abort": true, 00:18:06.915 "seek_hole": false, 00:18:06.915 "seek_data": false, 00:18:06.915 "copy": true, 00:18:06.915 "nvme_iov_md": false 00:18:06.915 }, 00:18:06.915 "memory_domains": [ 00:18:06.915 { 00:18:06.915 "dma_device_id": "system", 00:18:06.915 "dma_device_type": 1 00:18:06.915 }, 00:18:06.915 { 00:18:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.915 "dma_device_type": 2 00:18:06.915 } 00:18:06.915 ], 00:18:06.915 "driver_specific": {} 00:18:06.915 } 00:18:06.915 ] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.915 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.916 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.916 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.916 "name": "Existed_Raid", 
00:18:06.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.916 "strip_size_kb": 64, 00:18:06.916 "state": "configuring", 00:18:06.916 "raid_level": "concat", 00:18:06.916 "superblock": false, 00:18:06.916 "num_base_bdevs": 4, 00:18:06.916 "num_base_bdevs_discovered": 1, 00:18:06.916 "num_base_bdevs_operational": 4, 00:18:06.916 "base_bdevs_list": [ 00:18:06.916 { 00:18:06.916 "name": "BaseBdev1", 00:18:06.916 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:06.916 "is_configured": true, 00:18:06.916 "data_offset": 0, 00:18:06.916 "data_size": 65536 00:18:06.916 }, 00:18:06.916 { 00:18:06.916 "name": "BaseBdev2", 00:18:06.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.916 "is_configured": false, 00:18:06.916 "data_offset": 0, 00:18:06.916 "data_size": 0 00:18:06.916 }, 00:18:06.916 { 00:18:06.916 "name": "BaseBdev3", 00:18:06.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.916 "is_configured": false, 00:18:06.916 "data_offset": 0, 00:18:06.916 "data_size": 0 00:18:06.916 }, 00:18:06.916 { 00:18:06.916 "name": "BaseBdev4", 00:18:06.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.916 "is_configured": false, 00:18:06.916 "data_offset": 0, 00:18:06.916 "data_size": 0 00:18:06.916 } 00:18:06.916 ] 00:18:06.916 }' 00:18:06.916 09:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.916 09:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.174 [2024-11-06 09:11:06.196091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.174 [2024-11-06 09:11:06.196296] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.174 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.175 [2024-11-06 09:11:06.204126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.175 [2024-11-06 09:11:06.206199] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.175 [2024-11-06 09:11:06.206244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.175 [2024-11-06 09:11:06.206257] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.175 [2024-11-06 09:11:06.206283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.175 [2024-11-06 09:11:06.206292] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.175 [2024-11-06 09:11:06.206304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.175 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.433 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.433 "name": "Existed_Raid", 00:18:07.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.433 "strip_size_kb": 64, 00:18:07.433 "state": "configuring", 00:18:07.433 "raid_level": "concat", 00:18:07.433 "superblock": false, 00:18:07.433 "num_base_bdevs": 4, 00:18:07.433 
"num_base_bdevs_discovered": 1, 00:18:07.433 "num_base_bdevs_operational": 4, 00:18:07.434 "base_bdevs_list": [ 00:18:07.434 { 00:18:07.434 "name": "BaseBdev1", 00:18:07.434 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:07.434 "is_configured": true, 00:18:07.434 "data_offset": 0, 00:18:07.434 "data_size": 65536 00:18:07.434 }, 00:18:07.434 { 00:18:07.434 "name": "BaseBdev2", 00:18:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.434 "is_configured": false, 00:18:07.434 "data_offset": 0, 00:18:07.434 "data_size": 0 00:18:07.434 }, 00:18:07.434 { 00:18:07.434 "name": "BaseBdev3", 00:18:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.434 "is_configured": false, 00:18:07.434 "data_offset": 0, 00:18:07.434 "data_size": 0 00:18:07.434 }, 00:18:07.434 { 00:18:07.434 "name": "BaseBdev4", 00:18:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.434 "is_configured": false, 00:18:07.434 "data_offset": 0, 00:18:07.434 "data_size": 0 00:18:07.434 } 00:18:07.434 ] 00:18:07.434 }' 00:18:07.434 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.434 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.693 [2024-11-06 09:11:06.677798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.693 BaseBdev2 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:07.693 09:11:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.693 [ 00:18:07.693 { 00:18:07.693 "name": "BaseBdev2", 00:18:07.693 "aliases": [ 00:18:07.693 "0b62ddb2-4a37-42e3-835d-3e9808243408" 00:18:07.693 ], 00:18:07.693 "product_name": "Malloc disk", 00:18:07.693 "block_size": 512, 00:18:07.693 "num_blocks": 65536, 00:18:07.693 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:07.693 "assigned_rate_limits": { 00:18:07.693 "rw_ios_per_sec": 0, 00:18:07.693 "rw_mbytes_per_sec": 0, 00:18:07.693 "r_mbytes_per_sec": 0, 00:18:07.693 "w_mbytes_per_sec": 0 00:18:07.693 }, 00:18:07.693 "claimed": true, 00:18:07.693 "claim_type": "exclusive_write", 00:18:07.693 "zoned": false, 00:18:07.693 "supported_io_types": { 
00:18:07.693 "read": true, 00:18:07.693 "write": true, 00:18:07.693 "unmap": true, 00:18:07.693 "flush": true, 00:18:07.693 "reset": true, 00:18:07.693 "nvme_admin": false, 00:18:07.693 "nvme_io": false, 00:18:07.693 "nvme_io_md": false, 00:18:07.693 "write_zeroes": true, 00:18:07.693 "zcopy": true, 00:18:07.693 "get_zone_info": false, 00:18:07.693 "zone_management": false, 00:18:07.693 "zone_append": false, 00:18:07.693 "compare": false, 00:18:07.693 "compare_and_write": false, 00:18:07.693 "abort": true, 00:18:07.693 "seek_hole": false, 00:18:07.693 "seek_data": false, 00:18:07.693 "copy": true, 00:18:07.693 "nvme_iov_md": false 00:18:07.693 }, 00:18:07.693 "memory_domains": [ 00:18:07.693 { 00:18:07.693 "dma_device_id": "system", 00:18:07.693 "dma_device_type": 1 00:18:07.693 }, 00:18:07.693 { 00:18:07.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.693 "dma_device_type": 2 00:18:07.693 } 00:18:07.693 ], 00:18:07.693 "driver_specific": {} 00:18:07.693 } 00:18:07.693 ] 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.693 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.952 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.952 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.952 "name": "Existed_Raid", 00:18:07.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.952 "strip_size_kb": 64, 00:18:07.952 "state": "configuring", 00:18:07.952 "raid_level": "concat", 00:18:07.952 "superblock": false, 00:18:07.952 "num_base_bdevs": 4, 00:18:07.952 "num_base_bdevs_discovered": 2, 00:18:07.952 "num_base_bdevs_operational": 4, 00:18:07.952 "base_bdevs_list": [ 00:18:07.952 { 00:18:07.952 "name": "BaseBdev1", 00:18:07.952 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:07.952 "is_configured": true, 00:18:07.952 "data_offset": 0, 00:18:07.952 "data_size": 65536 00:18:07.952 }, 00:18:07.952 { 00:18:07.952 "name": "BaseBdev2", 00:18:07.952 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:07.952 
"is_configured": true, 00:18:07.952 "data_offset": 0, 00:18:07.952 "data_size": 65536 00:18:07.952 }, 00:18:07.952 { 00:18:07.952 "name": "BaseBdev3", 00:18:07.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.952 "is_configured": false, 00:18:07.952 "data_offset": 0, 00:18:07.952 "data_size": 0 00:18:07.952 }, 00:18:07.952 { 00:18:07.952 "name": "BaseBdev4", 00:18:07.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.952 "is_configured": false, 00:18:07.952 "data_offset": 0, 00:18:07.952 "data_size": 0 00:18:07.952 } 00:18:07.952 ] 00:18:07.952 }' 00:18:07.952 09:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.952 09:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 [2024-11-06 09:11:07.170545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.211 BaseBdev3 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 [ 00:18:08.211 { 00:18:08.211 "name": "BaseBdev3", 00:18:08.211 "aliases": [ 00:18:08.211 "60bd0e18-816a-49ec-9902-63e0406b6b98" 00:18:08.211 ], 00:18:08.211 "product_name": "Malloc disk", 00:18:08.211 "block_size": 512, 00:18:08.211 "num_blocks": 65536, 00:18:08.211 "uuid": "60bd0e18-816a-49ec-9902-63e0406b6b98", 00:18:08.211 "assigned_rate_limits": { 00:18:08.211 "rw_ios_per_sec": 0, 00:18:08.211 "rw_mbytes_per_sec": 0, 00:18:08.211 "r_mbytes_per_sec": 0, 00:18:08.211 "w_mbytes_per_sec": 0 00:18:08.211 }, 00:18:08.211 "claimed": true, 00:18:08.211 "claim_type": "exclusive_write", 00:18:08.211 "zoned": false, 00:18:08.211 "supported_io_types": { 00:18:08.211 "read": true, 00:18:08.211 "write": true, 00:18:08.211 "unmap": true, 00:18:08.211 "flush": true, 00:18:08.211 "reset": true, 00:18:08.211 "nvme_admin": false, 00:18:08.211 "nvme_io": false, 00:18:08.211 "nvme_io_md": false, 00:18:08.211 "write_zeroes": true, 00:18:08.211 "zcopy": true, 00:18:08.211 "get_zone_info": false, 00:18:08.211 "zone_management": false, 00:18:08.211 "zone_append": false, 00:18:08.211 "compare": false, 00:18:08.211 "compare_and_write": false, 
00:18:08.211 "abort": true, 00:18:08.211 "seek_hole": false, 00:18:08.211 "seek_data": false, 00:18:08.211 "copy": true, 00:18:08.211 "nvme_iov_md": false 00:18:08.211 }, 00:18:08.211 "memory_domains": [ 00:18:08.211 { 00:18:08.211 "dma_device_id": "system", 00:18:08.211 "dma_device_type": 1 00:18:08.211 }, 00:18:08.211 { 00:18:08.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.211 "dma_device_type": 2 00:18:08.211 } 00:18:08.211 ], 00:18:08.211 "driver_specific": {} 00:18:08.211 } 00:18:08.211 ] 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:08.211 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.212 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.471 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.471 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.471 "name": "Existed_Raid", 00:18:08.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.471 "strip_size_kb": 64, 00:18:08.471 "state": "configuring", 00:18:08.471 "raid_level": "concat", 00:18:08.471 "superblock": false, 00:18:08.471 "num_base_bdevs": 4, 00:18:08.471 "num_base_bdevs_discovered": 3, 00:18:08.471 "num_base_bdevs_operational": 4, 00:18:08.471 "base_bdevs_list": [ 00:18:08.471 { 00:18:08.471 "name": "BaseBdev1", 00:18:08.471 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:08.471 "is_configured": true, 00:18:08.471 "data_offset": 0, 00:18:08.471 "data_size": 65536 00:18:08.471 }, 00:18:08.471 { 00:18:08.471 "name": "BaseBdev2", 00:18:08.471 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:08.471 "is_configured": true, 00:18:08.471 "data_offset": 0, 00:18:08.471 "data_size": 65536 00:18:08.471 }, 00:18:08.471 { 00:18:08.471 "name": "BaseBdev3", 00:18:08.471 "uuid": "60bd0e18-816a-49ec-9902-63e0406b6b98", 00:18:08.471 "is_configured": true, 00:18:08.471 "data_offset": 0, 00:18:08.471 "data_size": 65536 00:18:08.471 }, 00:18:08.471 { 00:18:08.471 "name": "BaseBdev4", 00:18:08.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.471 "is_configured": false, 
00:18:08.471 "data_offset": 0, 00:18:08.471 "data_size": 0 00:18:08.471 } 00:18:08.471 ] 00:18:08.471 }' 00:18:08.471 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.471 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.730 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:08.730 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.731 [2024-11-06 09:11:07.688762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:08.731 [2024-11-06 09:11:07.688820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:08.731 [2024-11-06 09:11:07.688830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:08.731 [2024-11-06 09:11:07.689118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:08.731 [2024-11-06 09:11:07.689314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:08.731 [2024-11-06 09:11:07.689340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:08.731 [2024-11-06 09:11:07.689625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.731 BaseBdev4 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.731 [ 00:18:08.731 { 00:18:08.731 "name": "BaseBdev4", 00:18:08.731 "aliases": [ 00:18:08.731 "99d39ec1-8b17-46cc-91a4-34d9b9d6dd9a" 00:18:08.731 ], 00:18:08.731 "product_name": "Malloc disk", 00:18:08.731 "block_size": 512, 00:18:08.731 "num_blocks": 65536, 00:18:08.731 "uuid": "99d39ec1-8b17-46cc-91a4-34d9b9d6dd9a", 00:18:08.731 "assigned_rate_limits": { 00:18:08.731 "rw_ios_per_sec": 0, 00:18:08.731 "rw_mbytes_per_sec": 0, 00:18:08.731 "r_mbytes_per_sec": 0, 00:18:08.731 "w_mbytes_per_sec": 0 00:18:08.731 }, 00:18:08.731 "claimed": true, 00:18:08.731 "claim_type": "exclusive_write", 00:18:08.731 "zoned": false, 00:18:08.731 "supported_io_types": { 00:18:08.731 "read": true, 00:18:08.731 "write": true, 00:18:08.731 "unmap": true, 00:18:08.731 "flush": true, 00:18:08.731 "reset": true, 00:18:08.731 
"nvme_admin": false, 00:18:08.731 "nvme_io": false, 00:18:08.731 "nvme_io_md": false, 00:18:08.731 "write_zeroes": true, 00:18:08.731 "zcopy": true, 00:18:08.731 "get_zone_info": false, 00:18:08.731 "zone_management": false, 00:18:08.731 "zone_append": false, 00:18:08.731 "compare": false, 00:18:08.731 "compare_and_write": false, 00:18:08.731 "abort": true, 00:18:08.731 "seek_hole": false, 00:18:08.731 "seek_data": false, 00:18:08.731 "copy": true, 00:18:08.731 "nvme_iov_md": false 00:18:08.731 }, 00:18:08.731 "memory_domains": [ 00:18:08.731 { 00:18:08.731 "dma_device_id": "system", 00:18:08.731 "dma_device_type": 1 00:18:08.731 }, 00:18:08.731 { 00:18:08.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.731 "dma_device_type": 2 00:18:08.731 } 00:18:08.731 ], 00:18:08.731 "driver_specific": {} 00:18:08.731 } 00:18:08.731 ] 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.731 
09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.731 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.990 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.990 "name": "Existed_Raid", 00:18:08.990 "uuid": "a81cc159-bc0c-43f4-b71e-b3881d24ffbd", 00:18:08.991 "strip_size_kb": 64, 00:18:08.991 "state": "online", 00:18:08.991 "raid_level": "concat", 00:18:08.991 "superblock": false, 00:18:08.991 "num_base_bdevs": 4, 00:18:08.991 "num_base_bdevs_discovered": 4, 00:18:08.991 "num_base_bdevs_operational": 4, 00:18:08.991 "base_bdevs_list": [ 00:18:08.991 { 00:18:08.991 "name": "BaseBdev1", 00:18:08.991 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:08.991 "is_configured": true, 00:18:08.991 "data_offset": 0, 00:18:08.991 "data_size": 65536 00:18:08.991 }, 00:18:08.991 { 00:18:08.991 "name": "BaseBdev2", 00:18:08.991 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:08.991 "is_configured": true, 00:18:08.991 "data_offset": 0, 00:18:08.991 "data_size": 65536 00:18:08.991 }, 00:18:08.991 { 00:18:08.991 "name": "BaseBdev3", 
00:18:08.991 "uuid": "60bd0e18-816a-49ec-9902-63e0406b6b98", 00:18:08.991 "is_configured": true, 00:18:08.991 "data_offset": 0, 00:18:08.991 "data_size": 65536 00:18:08.991 }, 00:18:08.991 { 00:18:08.991 "name": "BaseBdev4", 00:18:08.991 "uuid": "99d39ec1-8b17-46cc-91a4-34d9b9d6dd9a", 00:18:08.991 "is_configured": true, 00:18:08.991 "data_offset": 0, 00:18:08.991 "data_size": 65536 00:18:08.991 } 00:18:08.991 ] 00:18:08.991 }' 00:18:08.991 09:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.991 09:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.249 [2024-11-06 09:11:08.148736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.249 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.249 
09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.249 "name": "Existed_Raid", 00:18:09.249 "aliases": [ 00:18:09.249 "a81cc159-bc0c-43f4-b71e-b3881d24ffbd" 00:18:09.249 ], 00:18:09.249 "product_name": "Raid Volume", 00:18:09.249 "block_size": 512, 00:18:09.249 "num_blocks": 262144, 00:18:09.249 "uuid": "a81cc159-bc0c-43f4-b71e-b3881d24ffbd", 00:18:09.249 "assigned_rate_limits": { 00:18:09.249 "rw_ios_per_sec": 0, 00:18:09.249 "rw_mbytes_per_sec": 0, 00:18:09.249 "r_mbytes_per_sec": 0, 00:18:09.249 "w_mbytes_per_sec": 0 00:18:09.249 }, 00:18:09.249 "claimed": false, 00:18:09.249 "zoned": false, 00:18:09.249 "supported_io_types": { 00:18:09.249 "read": true, 00:18:09.249 "write": true, 00:18:09.249 "unmap": true, 00:18:09.249 "flush": true, 00:18:09.249 "reset": true, 00:18:09.249 "nvme_admin": false, 00:18:09.249 "nvme_io": false, 00:18:09.249 "nvme_io_md": false, 00:18:09.249 "write_zeroes": true, 00:18:09.249 "zcopy": false, 00:18:09.249 "get_zone_info": false, 00:18:09.249 "zone_management": false, 00:18:09.249 "zone_append": false, 00:18:09.249 "compare": false, 00:18:09.249 "compare_and_write": false, 00:18:09.249 "abort": false, 00:18:09.249 "seek_hole": false, 00:18:09.249 "seek_data": false, 00:18:09.249 "copy": false, 00:18:09.249 "nvme_iov_md": false 00:18:09.249 }, 00:18:09.249 "memory_domains": [ 00:18:09.249 { 00:18:09.249 "dma_device_id": "system", 00:18:09.249 "dma_device_type": 1 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.249 "dma_device_type": 2 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "system", 00:18:09.249 "dma_device_type": 1 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.249 "dma_device_type": 2 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "system", 00:18:09.249 "dma_device_type": 1 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:09.249 "dma_device_type": 2 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "system", 00:18:09.249 "dma_device_type": 1 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.249 "dma_device_type": 2 00:18:09.249 } 00:18:09.249 ], 00:18:09.249 "driver_specific": { 00:18:09.249 "raid": { 00:18:09.249 "uuid": "a81cc159-bc0c-43f4-b71e-b3881d24ffbd", 00:18:09.249 "strip_size_kb": 64, 00:18:09.249 "state": "online", 00:18:09.249 "raid_level": "concat", 00:18:09.249 "superblock": false, 00:18:09.249 "num_base_bdevs": 4, 00:18:09.249 "num_base_bdevs_discovered": 4, 00:18:09.249 "num_base_bdevs_operational": 4, 00:18:09.249 "base_bdevs_list": [ 00:18:09.249 { 00:18:09.249 "name": "BaseBdev1", 00:18:09.249 "uuid": "986c959a-0319-4300-aeb1-b08a93462884", 00:18:09.249 "is_configured": true, 00:18:09.249 "data_offset": 0, 00:18:09.249 "data_size": 65536 00:18:09.249 }, 00:18:09.249 { 00:18:09.249 "name": "BaseBdev2", 00:18:09.249 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:09.249 "is_configured": true, 00:18:09.249 "data_offset": 0, 00:18:09.249 "data_size": 65536 00:18:09.249 }, 00:18:09.250 { 00:18:09.250 "name": "BaseBdev3", 00:18:09.250 "uuid": "60bd0e18-816a-49ec-9902-63e0406b6b98", 00:18:09.250 "is_configured": true, 00:18:09.250 "data_offset": 0, 00:18:09.250 "data_size": 65536 00:18:09.250 }, 00:18:09.250 { 00:18:09.250 "name": "BaseBdev4", 00:18:09.250 "uuid": "99d39ec1-8b17-46cc-91a4-34d9b9d6dd9a", 00:18:09.250 "is_configured": true, 00:18:09.250 "data_offset": 0, 00:18:09.250 "data_size": 65536 00:18:09.250 } 00:18:09.250 ] 00:18:09.250 } 00:18:09.250 } 00:18:09.250 }' 00:18:09.250 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.250 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:09.250 BaseBdev2 
00:18:09.250 BaseBdev3 00:18:09.250 BaseBdev4' 00:18:09.250 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.508 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.509 09:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.509 09:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.509 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.509 [2024-11-06 09:11:08.460439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.509 [2024-11-06 09:11:08.460472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.509 [2024-11-06 09:11:08.460522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.767 "name": "Existed_Raid", 00:18:09.767 "uuid": "a81cc159-bc0c-43f4-b71e-b3881d24ffbd", 00:18:09.767 "strip_size_kb": 64, 00:18:09.767 "state": "offline", 00:18:09.767 "raid_level": "concat", 00:18:09.767 "superblock": false, 00:18:09.767 "num_base_bdevs": 4, 00:18:09.767 "num_base_bdevs_discovered": 3, 00:18:09.767 "num_base_bdevs_operational": 3, 00:18:09.767 "base_bdevs_list": [ 00:18:09.767 { 00:18:09.767 "name": null, 00:18:09.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.767 "is_configured": false, 00:18:09.767 "data_offset": 0, 00:18:09.767 "data_size": 65536 00:18:09.767 }, 00:18:09.767 { 00:18:09.767 "name": "BaseBdev2", 00:18:09.767 "uuid": "0b62ddb2-4a37-42e3-835d-3e9808243408", 00:18:09.767 "is_configured": 
true, 00:18:09.767 "data_offset": 0, 00:18:09.767 "data_size": 65536 00:18:09.767 }, 00:18:09.767 { 00:18:09.767 "name": "BaseBdev3", 00:18:09.767 "uuid": "60bd0e18-816a-49ec-9902-63e0406b6b98", 00:18:09.767 "is_configured": true, 00:18:09.767 "data_offset": 0, 00:18:09.767 "data_size": 65536 00:18:09.767 }, 00:18:09.767 { 00:18:09.767 "name": "BaseBdev4", 00:18:09.767 "uuid": "99d39ec1-8b17-46cc-91a4-34d9b9d6dd9a", 00:18:09.767 "is_configured": true, 00:18:09.767 "data_offset": 0, 00:18:09.767 "data_size": 65536 00:18:09.767 } 00:18:09.767 ] 00:18:09.767 }' 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.767 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:10.035 09:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.035 [2024-11-06 09:11:08.995726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.294 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 [2024-11-06 09:11:09.143829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.295 09:11:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.295 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 [2024-11-06 09:11:09.295265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:10.295 [2024-11-06 09:11:09.295331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 BaseBdev2 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 [ 00:18:10.554 { 00:18:10.554 "name": "BaseBdev2", 00:18:10.554 "aliases": [ 00:18:10.554 "ea7a3606-ecce-4e47-acd6-fa55033ff16b" 00:18:10.554 ], 00:18:10.554 "product_name": "Malloc disk", 00:18:10.554 "block_size": 512, 00:18:10.554 "num_blocks": 65536, 00:18:10.554 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:10.554 "assigned_rate_limits": { 00:18:10.554 "rw_ios_per_sec": 0, 00:18:10.554 "rw_mbytes_per_sec": 0, 00:18:10.554 "r_mbytes_per_sec": 0, 00:18:10.554 "w_mbytes_per_sec": 0 00:18:10.554 }, 00:18:10.554 "claimed": false, 00:18:10.554 "zoned": false, 00:18:10.554 "supported_io_types": { 00:18:10.554 "read": true, 00:18:10.554 "write": true, 00:18:10.554 "unmap": true, 00:18:10.554 "flush": true, 00:18:10.554 "reset": true, 00:18:10.554 "nvme_admin": false, 00:18:10.554 "nvme_io": false, 00:18:10.554 "nvme_io_md": false, 00:18:10.554 "write_zeroes": true, 00:18:10.554 "zcopy": true, 00:18:10.554 "get_zone_info": false, 00:18:10.554 "zone_management": false, 00:18:10.554 "zone_append": false, 00:18:10.554 "compare": false, 00:18:10.554 "compare_and_write": false, 00:18:10.554 "abort": true, 00:18:10.554 "seek_hole": false, 00:18:10.554 
"seek_data": false, 00:18:10.554 "copy": true, 00:18:10.554 "nvme_iov_md": false 00:18:10.554 }, 00:18:10.554 "memory_domains": [ 00:18:10.554 { 00:18:10.554 "dma_device_id": "system", 00:18:10.554 "dma_device_type": 1 00:18:10.554 }, 00:18:10.554 { 00:18:10.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.554 "dma_device_type": 2 00:18:10.554 } 00:18:10.554 ], 00:18:10.554 "driver_specific": {} 00:18:10.554 } 00:18:10.554 ] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:10.554 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.555 BaseBdev3 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.555 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.814 [ 00:18:10.814 { 00:18:10.814 "name": "BaseBdev3", 00:18:10.814 "aliases": [ 00:18:10.814 "61866061-00f9-4692-9f5a-8a782b873385" 00:18:10.814 ], 00:18:10.814 "product_name": "Malloc disk", 00:18:10.814 "block_size": 512, 00:18:10.814 "num_blocks": 65536, 00:18:10.814 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:10.814 "assigned_rate_limits": { 00:18:10.814 "rw_ios_per_sec": 0, 00:18:10.814 "rw_mbytes_per_sec": 0, 00:18:10.814 "r_mbytes_per_sec": 0, 00:18:10.814 "w_mbytes_per_sec": 0 00:18:10.814 }, 00:18:10.814 "claimed": false, 00:18:10.814 "zoned": false, 00:18:10.814 "supported_io_types": { 00:18:10.814 "read": true, 00:18:10.814 "write": true, 00:18:10.814 "unmap": true, 00:18:10.814 "flush": true, 00:18:10.814 "reset": true, 00:18:10.814 "nvme_admin": false, 00:18:10.814 "nvme_io": false, 00:18:10.814 "nvme_io_md": false, 00:18:10.814 "write_zeroes": true, 00:18:10.814 "zcopy": true, 00:18:10.814 "get_zone_info": false, 00:18:10.814 "zone_management": false, 00:18:10.814 "zone_append": false, 00:18:10.814 "compare": false, 00:18:10.814 "compare_and_write": false, 00:18:10.814 "abort": true, 00:18:10.814 "seek_hole": false, 00:18:10.814 "seek_data": false, 
00:18:10.814 "copy": true, 00:18:10.814 "nvme_iov_md": false 00:18:10.814 }, 00:18:10.814 "memory_domains": [ 00:18:10.814 { 00:18:10.814 "dma_device_id": "system", 00:18:10.814 "dma_device_type": 1 00:18:10.814 }, 00:18:10.814 { 00:18:10.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.814 "dma_device_type": 2 00:18:10.814 } 00:18:10.814 ], 00:18:10.814 "driver_specific": {} 00:18:10.814 } 00:18:10.814 ] 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.814 BaseBdev4 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:10.814 
09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.814 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.814 [ 00:18:10.814 { 00:18:10.814 "name": "BaseBdev4", 00:18:10.814 "aliases": [ 00:18:10.814 "85e81215-358b-417f-8ec0-3ac0e4aca9db" 00:18:10.814 ], 00:18:10.814 "product_name": "Malloc disk", 00:18:10.814 "block_size": 512, 00:18:10.814 "num_blocks": 65536, 00:18:10.814 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:10.814 "assigned_rate_limits": { 00:18:10.814 "rw_ios_per_sec": 0, 00:18:10.814 "rw_mbytes_per_sec": 0, 00:18:10.814 "r_mbytes_per_sec": 0, 00:18:10.814 "w_mbytes_per_sec": 0 00:18:10.814 }, 00:18:10.814 "claimed": false, 00:18:10.814 "zoned": false, 00:18:10.814 "supported_io_types": { 00:18:10.814 "read": true, 00:18:10.814 "write": true, 00:18:10.814 "unmap": true, 00:18:10.814 "flush": true, 00:18:10.814 "reset": true, 00:18:10.814 "nvme_admin": false, 00:18:10.814 "nvme_io": false, 00:18:10.814 "nvme_io_md": false, 00:18:10.814 "write_zeroes": true, 00:18:10.814 "zcopy": true, 00:18:10.814 "get_zone_info": false, 00:18:10.814 "zone_management": false, 00:18:10.814 "zone_append": false, 00:18:10.814 "compare": false, 00:18:10.814 "compare_and_write": false, 00:18:10.814 "abort": true, 00:18:10.814 "seek_hole": false, 00:18:10.814 "seek_data": false, 00:18:10.814 
"copy": true, 00:18:10.814 "nvme_iov_md": false 00:18:10.814 }, 00:18:10.814 "memory_domains": [ 00:18:10.814 { 00:18:10.814 "dma_device_id": "system", 00:18:10.814 "dma_device_type": 1 00:18:10.814 }, 00:18:10.814 { 00:18:10.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.814 "dma_device_type": 2 00:18:10.814 } 00:18:10.814 ], 00:18:10.814 "driver_specific": {} 00:18:10.814 } 00:18:10.814 ] 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.815 [2024-11-06 09:11:09.725444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.815 [2024-11-06 09:11:09.725617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.815 [2024-11-06 09:11:09.725757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.815 [2024-11-06 09:11:09.728066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.815 [2024-11-06 09:11:09.728136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.815 09:11:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.815 "name": "Existed_Raid", 00:18:10.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.815 "strip_size_kb": 64, 00:18:10.815 "state": "configuring", 00:18:10.815 
"raid_level": "concat", 00:18:10.815 "superblock": false, 00:18:10.815 "num_base_bdevs": 4, 00:18:10.815 "num_base_bdevs_discovered": 3, 00:18:10.815 "num_base_bdevs_operational": 4, 00:18:10.815 "base_bdevs_list": [ 00:18:10.815 { 00:18:10.815 "name": "BaseBdev1", 00:18:10.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.815 "is_configured": false, 00:18:10.815 "data_offset": 0, 00:18:10.815 "data_size": 0 00:18:10.815 }, 00:18:10.815 { 00:18:10.815 "name": "BaseBdev2", 00:18:10.815 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:10.815 "is_configured": true, 00:18:10.815 "data_offset": 0, 00:18:10.815 "data_size": 65536 00:18:10.815 }, 00:18:10.815 { 00:18:10.815 "name": "BaseBdev3", 00:18:10.815 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:10.815 "is_configured": true, 00:18:10.815 "data_offset": 0, 00:18:10.815 "data_size": 65536 00:18:10.815 }, 00:18:10.815 { 00:18:10.815 "name": "BaseBdev4", 00:18:10.815 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:10.815 "is_configured": true, 00:18:10.815 "data_offset": 0, 00:18:10.815 "data_size": 65536 00:18:10.815 } 00:18:10.815 ] 00:18:10.815 }' 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.815 09:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.382 [2024-11-06 09:11:10.132853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.382 "name": "Existed_Raid", 00:18:11.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.382 "strip_size_kb": 64, 00:18:11.382 "state": "configuring", 00:18:11.382 "raid_level": "concat", 00:18:11.382 "superblock": false, 
00:18:11.382 "num_base_bdevs": 4, 00:18:11.382 "num_base_bdevs_discovered": 2, 00:18:11.382 "num_base_bdevs_operational": 4, 00:18:11.382 "base_bdevs_list": [ 00:18:11.382 { 00:18:11.382 "name": "BaseBdev1", 00:18:11.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.382 "is_configured": false, 00:18:11.382 "data_offset": 0, 00:18:11.382 "data_size": 0 00:18:11.382 }, 00:18:11.382 { 00:18:11.382 "name": null, 00:18:11.382 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:11.382 "is_configured": false, 00:18:11.382 "data_offset": 0, 00:18:11.382 "data_size": 65536 00:18:11.382 }, 00:18:11.382 { 00:18:11.382 "name": "BaseBdev3", 00:18:11.382 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:11.382 "is_configured": true, 00:18:11.382 "data_offset": 0, 00:18:11.382 "data_size": 65536 00:18:11.382 }, 00:18:11.382 { 00:18:11.382 "name": "BaseBdev4", 00:18:11.382 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:11.382 "is_configured": true, 00:18:11.382 "data_offset": 0, 00:18:11.382 "data_size": 65536 00:18:11.382 } 00:18:11.382 ] 00:18:11.382 }' 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.382 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:11.641 09:11:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 [2024-11-06 09:11:10.644960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.641 BaseBdev1 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.641 09:11:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.641 [ 00:18:11.641 { 00:18:11.641 "name": "BaseBdev1", 00:18:11.641 "aliases": [ 00:18:11.641 "1f552257-8121-4e12-8965-c51d03e8e3d3" 00:18:11.641 ], 00:18:11.641 "product_name": "Malloc disk", 00:18:11.641 "block_size": 512, 00:18:11.641 "num_blocks": 65536, 00:18:11.641 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:11.641 "assigned_rate_limits": { 00:18:11.641 "rw_ios_per_sec": 0, 00:18:11.641 "rw_mbytes_per_sec": 0, 00:18:11.641 "r_mbytes_per_sec": 0, 00:18:11.641 "w_mbytes_per_sec": 0 00:18:11.641 }, 00:18:11.641 "claimed": true, 00:18:11.641 "claim_type": "exclusive_write", 00:18:11.641 "zoned": false, 00:18:11.641 "supported_io_types": { 00:18:11.641 "read": true, 00:18:11.900 "write": true, 00:18:11.900 "unmap": true, 00:18:11.900 "flush": true, 00:18:11.900 "reset": true, 00:18:11.900 "nvme_admin": false, 00:18:11.900 "nvme_io": false, 00:18:11.900 "nvme_io_md": false, 00:18:11.900 "write_zeroes": true, 00:18:11.900 "zcopy": true, 00:18:11.900 "get_zone_info": false, 00:18:11.900 "zone_management": false, 00:18:11.900 "zone_append": false, 00:18:11.900 "compare": false, 00:18:11.900 "compare_and_write": false, 00:18:11.900 "abort": true, 00:18:11.900 "seek_hole": false, 00:18:11.900 "seek_data": false, 00:18:11.900 "copy": true, 00:18:11.900 "nvme_iov_md": false 00:18:11.900 }, 00:18:11.900 "memory_domains": [ 00:18:11.900 { 00:18:11.900 "dma_device_id": "system", 00:18:11.900 "dma_device_type": 1 00:18:11.900 }, 00:18:11.900 { 00:18:11.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.900 "dma_device_type": 2 00:18:11.900 } 00:18:11.900 ], 00:18:11.900 "driver_specific": {} 00:18:11.900 } 00:18:11.900 ] 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.900 "name": "Existed_Raid", 00:18:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.900 "strip_size_kb": 64, 00:18:11.900 "state": "configuring", 00:18:11.900 "raid_level": "concat", 00:18:11.900 "superblock": false, 
00:18:11.900 "num_base_bdevs": 4, 00:18:11.900 "num_base_bdevs_discovered": 3, 00:18:11.900 "num_base_bdevs_operational": 4, 00:18:11.900 "base_bdevs_list": [ 00:18:11.900 { 00:18:11.900 "name": "BaseBdev1", 00:18:11.900 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:11.900 "is_configured": true, 00:18:11.900 "data_offset": 0, 00:18:11.900 "data_size": 65536 00:18:11.900 }, 00:18:11.900 { 00:18:11.900 "name": null, 00:18:11.900 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:11.900 "is_configured": false, 00:18:11.900 "data_offset": 0, 00:18:11.900 "data_size": 65536 00:18:11.900 }, 00:18:11.900 { 00:18:11.900 "name": "BaseBdev3", 00:18:11.900 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:11.900 "is_configured": true, 00:18:11.900 "data_offset": 0, 00:18:11.900 "data_size": 65536 00:18:11.900 }, 00:18:11.900 { 00:18:11.900 "name": "BaseBdev4", 00:18:11.900 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:11.900 "is_configured": true, 00:18:11.900 "data_offset": 0, 00:18:11.900 "data_size": 65536 00:18:11.900 } 00:18:11.900 ] 00:18:11.900 }' 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.900 09:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:12.159 09:11:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.159 [2024-11-06 09:11:11.172427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.159 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.417 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.417 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.417 "name": "Existed_Raid", 00:18:12.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.417 "strip_size_kb": 64, 00:18:12.417 "state": "configuring", 00:18:12.417 "raid_level": "concat", 00:18:12.417 "superblock": false, 00:18:12.417 "num_base_bdevs": 4, 00:18:12.417 "num_base_bdevs_discovered": 2, 00:18:12.417 "num_base_bdevs_operational": 4, 00:18:12.417 "base_bdevs_list": [ 00:18:12.417 { 00:18:12.417 "name": "BaseBdev1", 00:18:12.417 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:12.417 "is_configured": true, 00:18:12.417 "data_offset": 0, 00:18:12.417 "data_size": 65536 00:18:12.417 }, 00:18:12.417 { 00:18:12.417 "name": null, 00:18:12.417 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:12.417 "is_configured": false, 00:18:12.417 "data_offset": 0, 00:18:12.417 "data_size": 65536 00:18:12.417 }, 00:18:12.417 { 00:18:12.417 "name": null, 00:18:12.417 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:12.417 "is_configured": false, 00:18:12.417 "data_offset": 0, 00:18:12.417 "data_size": 65536 00:18:12.417 }, 00:18:12.417 { 00:18:12.417 "name": "BaseBdev4", 00:18:12.417 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:12.417 "is_configured": true, 00:18:12.417 "data_offset": 0, 00:18:12.417 "data_size": 65536 00:18:12.417 } 00:18:12.417 ] 00:18:12.417 }' 00:18:12.417 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.417 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.675 [2024-11-06 09:11:11.652429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.675 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.675 "name": "Existed_Raid", 00:18:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.675 "strip_size_kb": 64, 00:18:12.675 "state": "configuring", 00:18:12.675 "raid_level": "concat", 00:18:12.675 "superblock": false, 00:18:12.675 "num_base_bdevs": 4, 00:18:12.675 "num_base_bdevs_discovered": 3, 00:18:12.675 "num_base_bdevs_operational": 4, 00:18:12.675 "base_bdevs_list": [ 00:18:12.675 { 00:18:12.675 "name": "BaseBdev1", 00:18:12.675 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:12.675 "is_configured": true, 00:18:12.675 "data_offset": 0, 00:18:12.675 "data_size": 65536 00:18:12.675 }, 00:18:12.675 { 00:18:12.675 "name": null, 00:18:12.675 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:12.675 "is_configured": false, 00:18:12.675 "data_offset": 0, 00:18:12.675 "data_size": 65536 00:18:12.675 }, 00:18:12.675 { 00:18:12.675 "name": "BaseBdev3", 00:18:12.675 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:12.675 
"is_configured": true, 00:18:12.675 "data_offset": 0, 00:18:12.675 "data_size": 65536 00:18:12.675 }, 00:18:12.675 { 00:18:12.675 "name": "BaseBdev4", 00:18:12.675 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:12.675 "is_configured": true, 00:18:12.675 "data_offset": 0, 00:18:12.675 "data_size": 65536 00:18:12.675 } 00:18:12.675 ] 00:18:12.675 }' 00:18:12.676 09:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.676 09:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.264 [2024-11-06 09:11:12.092260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.264 "name": "Existed_Raid", 00:18:13.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.264 "strip_size_kb": 64, 00:18:13.264 "state": "configuring", 00:18:13.264 "raid_level": "concat", 00:18:13.264 "superblock": false, 00:18:13.264 "num_base_bdevs": 4, 00:18:13.264 "num_base_bdevs_discovered": 2, 00:18:13.264 "num_base_bdevs_operational": 4, 
00:18:13.264 "base_bdevs_list": [ 00:18:13.264 { 00:18:13.264 "name": null, 00:18:13.264 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:13.264 "is_configured": false, 00:18:13.264 "data_offset": 0, 00:18:13.264 "data_size": 65536 00:18:13.264 }, 00:18:13.264 { 00:18:13.264 "name": null, 00:18:13.264 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:13.264 "is_configured": false, 00:18:13.264 "data_offset": 0, 00:18:13.264 "data_size": 65536 00:18:13.264 }, 00:18:13.264 { 00:18:13.264 "name": "BaseBdev3", 00:18:13.264 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:13.264 "is_configured": true, 00:18:13.264 "data_offset": 0, 00:18:13.264 "data_size": 65536 00:18:13.264 }, 00:18:13.264 { 00:18:13.264 "name": "BaseBdev4", 00:18:13.264 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:13.264 "is_configured": true, 00:18:13.264 "data_offset": 0, 00:18:13.264 "data_size": 65536 00:18:13.264 } 00:18:13.264 ] 00:18:13.264 }' 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.264 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:13.837 09:11:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 [2024-11-06 09:11:12.637445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.837 09:11:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.837 "name": "Existed_Raid", 00:18:13.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.837 "strip_size_kb": 64, 00:18:13.837 "state": "configuring", 00:18:13.837 "raid_level": "concat", 00:18:13.837 "superblock": false, 00:18:13.837 "num_base_bdevs": 4, 00:18:13.837 "num_base_bdevs_discovered": 3, 00:18:13.837 "num_base_bdevs_operational": 4, 00:18:13.837 "base_bdevs_list": [ 00:18:13.837 { 00:18:13.837 "name": null, 00:18:13.837 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:13.837 "is_configured": false, 00:18:13.837 "data_offset": 0, 00:18:13.837 "data_size": 65536 00:18:13.837 }, 00:18:13.837 { 00:18:13.837 "name": "BaseBdev2", 00:18:13.837 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:13.837 "is_configured": true, 00:18:13.837 "data_offset": 0, 00:18:13.837 "data_size": 65536 00:18:13.837 }, 00:18:13.837 { 00:18:13.837 "name": "BaseBdev3", 00:18:13.837 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:13.837 "is_configured": true, 00:18:13.837 "data_offset": 0, 00:18:13.837 "data_size": 65536 00:18:13.837 }, 00:18:13.837 { 00:18:13.837 "name": "BaseBdev4", 00:18:13.837 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:13.837 "is_configured": true, 00:18:13.837 "data_offset": 0, 00:18:13.837 "data_size": 65536 00:18:13.837 } 00:18:13.837 ] 00:18:13.837 }' 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.837 09:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:14.096 09:11:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1f552257-8121-4e12-8965-c51d03e8e3d3 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.096 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.355 [2024-11-06 09:11:13.147116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:14.355 [2024-11-06 09:11:13.147395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:14.355 [2024-11-06 09:11:13.147441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:14.355 [2024-11-06 09:11:13.147830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:14.355 
[2024-11-06 09:11:13.148096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:14.355 [2024-11-06 09:11:13.148206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:14.355 [2024-11-06 09:11:13.148542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.355 NewBaseBdev 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:14.355 [ 00:18:14.355 { 00:18:14.355 "name": "NewBaseBdev", 00:18:14.355 "aliases": [ 00:18:14.355 "1f552257-8121-4e12-8965-c51d03e8e3d3" 00:18:14.355 ], 00:18:14.355 "product_name": "Malloc disk", 00:18:14.355 "block_size": 512, 00:18:14.355 "num_blocks": 65536, 00:18:14.355 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:14.355 "assigned_rate_limits": { 00:18:14.355 "rw_ios_per_sec": 0, 00:18:14.355 "rw_mbytes_per_sec": 0, 00:18:14.355 "r_mbytes_per_sec": 0, 00:18:14.355 "w_mbytes_per_sec": 0 00:18:14.355 }, 00:18:14.355 "claimed": true, 00:18:14.355 "claim_type": "exclusive_write", 00:18:14.355 "zoned": false, 00:18:14.355 "supported_io_types": { 00:18:14.355 "read": true, 00:18:14.355 "write": true, 00:18:14.355 "unmap": true, 00:18:14.355 "flush": true, 00:18:14.355 "reset": true, 00:18:14.355 "nvme_admin": false, 00:18:14.355 "nvme_io": false, 00:18:14.355 "nvme_io_md": false, 00:18:14.355 "write_zeroes": true, 00:18:14.355 "zcopy": true, 00:18:14.355 "get_zone_info": false, 00:18:14.355 "zone_management": false, 00:18:14.355 "zone_append": false, 00:18:14.355 "compare": false, 00:18:14.355 "compare_and_write": false, 00:18:14.355 "abort": true, 00:18:14.355 "seek_hole": false, 00:18:14.355 "seek_data": false, 00:18:14.355 "copy": true, 00:18:14.355 "nvme_iov_md": false 00:18:14.355 }, 00:18:14.355 "memory_domains": [ 00:18:14.355 { 00:18:14.355 "dma_device_id": "system", 00:18:14.355 "dma_device_type": 1 00:18:14.355 }, 00:18:14.355 { 00:18:14.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.355 "dma_device_type": 2 00:18:14.355 } 00:18:14.355 ], 00:18:14.355 "driver_specific": {} 00:18:14.355 } 00:18:14.355 ] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.355 "name": "Existed_Raid", 00:18:14.355 "uuid": "d5040fcf-9628-4abb-ae43-d684d101fa8a", 00:18:14.355 "strip_size_kb": 64, 00:18:14.355 "state": "online", 00:18:14.355 "raid_level": "concat", 00:18:14.355 "superblock": false, 00:18:14.355 "num_base_bdevs": 4, 00:18:14.355 
"num_base_bdevs_discovered": 4, 00:18:14.355 "num_base_bdevs_operational": 4, 00:18:14.355 "base_bdevs_list": [ 00:18:14.355 { 00:18:14.355 "name": "NewBaseBdev", 00:18:14.355 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:14.355 "is_configured": true, 00:18:14.355 "data_offset": 0, 00:18:14.355 "data_size": 65536 00:18:14.355 }, 00:18:14.355 { 00:18:14.355 "name": "BaseBdev2", 00:18:14.355 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:14.355 "is_configured": true, 00:18:14.355 "data_offset": 0, 00:18:14.355 "data_size": 65536 00:18:14.355 }, 00:18:14.355 { 00:18:14.355 "name": "BaseBdev3", 00:18:14.355 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:14.355 "is_configured": true, 00:18:14.355 "data_offset": 0, 00:18:14.355 "data_size": 65536 00:18:14.355 }, 00:18:14.355 { 00:18:14.355 "name": "BaseBdev4", 00:18:14.355 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:14.355 "is_configured": true, 00:18:14.355 "data_offset": 0, 00:18:14.355 "data_size": 65536 00:18:14.355 } 00:18:14.355 ] 00:18:14.355 }' 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.355 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.615 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 [2024-11-06 09:11:13.622864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.875 "name": "Existed_Raid", 00:18:14.875 "aliases": [ 00:18:14.875 "d5040fcf-9628-4abb-ae43-d684d101fa8a" 00:18:14.875 ], 00:18:14.875 "product_name": "Raid Volume", 00:18:14.875 "block_size": 512, 00:18:14.875 "num_blocks": 262144, 00:18:14.875 "uuid": "d5040fcf-9628-4abb-ae43-d684d101fa8a", 00:18:14.875 "assigned_rate_limits": { 00:18:14.875 "rw_ios_per_sec": 0, 00:18:14.875 "rw_mbytes_per_sec": 0, 00:18:14.875 "r_mbytes_per_sec": 0, 00:18:14.875 "w_mbytes_per_sec": 0 00:18:14.875 }, 00:18:14.875 "claimed": false, 00:18:14.875 "zoned": false, 00:18:14.875 "supported_io_types": { 00:18:14.875 "read": true, 00:18:14.875 "write": true, 00:18:14.875 "unmap": true, 00:18:14.875 "flush": true, 00:18:14.875 "reset": true, 00:18:14.875 "nvme_admin": false, 00:18:14.875 "nvme_io": false, 00:18:14.875 "nvme_io_md": false, 00:18:14.875 "write_zeroes": true, 00:18:14.875 "zcopy": false, 00:18:14.875 "get_zone_info": false, 00:18:14.875 "zone_management": false, 00:18:14.875 "zone_append": false, 00:18:14.875 "compare": false, 00:18:14.875 "compare_and_write": false, 00:18:14.875 "abort": false, 00:18:14.875 "seek_hole": false, 00:18:14.875 "seek_data": false, 00:18:14.875 "copy": false, 00:18:14.875 "nvme_iov_md": false 00:18:14.875 }, 00:18:14.875 "memory_domains": [ 
00:18:14.875 { 00:18:14.875 "dma_device_id": "system", 00:18:14.875 "dma_device_type": 1 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.875 "dma_device_type": 2 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "system", 00:18:14.875 "dma_device_type": 1 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.875 "dma_device_type": 2 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "system", 00:18:14.875 "dma_device_type": 1 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.875 "dma_device_type": 2 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "system", 00:18:14.875 "dma_device_type": 1 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.875 "dma_device_type": 2 00:18:14.875 } 00:18:14.875 ], 00:18:14.875 "driver_specific": { 00:18:14.875 "raid": { 00:18:14.875 "uuid": "d5040fcf-9628-4abb-ae43-d684d101fa8a", 00:18:14.875 "strip_size_kb": 64, 00:18:14.875 "state": "online", 00:18:14.875 "raid_level": "concat", 00:18:14.875 "superblock": false, 00:18:14.875 "num_base_bdevs": 4, 00:18:14.875 "num_base_bdevs_discovered": 4, 00:18:14.875 "num_base_bdevs_operational": 4, 00:18:14.875 "base_bdevs_list": [ 00:18:14.875 { 00:18:14.875 "name": "NewBaseBdev", 00:18:14.875 "uuid": "1f552257-8121-4e12-8965-c51d03e8e3d3", 00:18:14.875 "is_configured": true, 00:18:14.875 "data_offset": 0, 00:18:14.875 "data_size": 65536 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "name": "BaseBdev2", 00:18:14.875 "uuid": "ea7a3606-ecce-4e47-acd6-fa55033ff16b", 00:18:14.875 "is_configured": true, 00:18:14.875 "data_offset": 0, 00:18:14.875 "data_size": 65536 00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "name": "BaseBdev3", 00:18:14.875 "uuid": "61866061-00f9-4692-9f5a-8a782b873385", 00:18:14.875 "is_configured": true, 00:18:14.875 "data_offset": 0, 00:18:14.875 "data_size": 65536 
00:18:14.875 }, 00:18:14.875 { 00:18:14.875 "name": "BaseBdev4", 00:18:14.875 "uuid": "85e81215-358b-417f-8ec0-3ac0e4aca9db", 00:18:14.875 "is_configured": true, 00:18:14.875 "data_offset": 0, 00:18:14.875 "data_size": 65536 00:18:14.875 } 00:18:14.875 ] 00:18:14.875 } 00:18:14.875 } 00:18:14.875 }' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:14.875 BaseBdev2 00:18:14.875 BaseBdev3 00:18:14.875 BaseBdev4' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.875 
09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.133 [2024-11-06 09:11:13.926245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.133 [2024-11-06 09:11:13.926391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.133 [2024-11-06 09:11:13.926478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.133 [2024-11-06 09:11:13.926551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.133 [2024-11-06 09:11:13.926564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71039 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 71039 ']' 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71039 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71039 00:18:15.133 killing process with pid 71039 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71039' 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71039 00:18:15.133 [2024-11-06 09:11:13.967746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.133 09:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71039 00:18:15.392 [2024-11-06 09:11:14.368368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.768 ************************************ 00:18:16.768 END TEST raid_state_function_test 00:18:16.768 ************************************ 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:16.768 00:18:16.768 real 0m11.164s 00:18:16.768 user 0m17.642s 00:18:16.768 sys 0m2.317s 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.768 09:11:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:18:16.768 09:11:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:16.768 09:11:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:16.768 09:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.768 ************************************ 00:18:16.768 START TEST raid_state_function_test_sb 00:18:16.768 ************************************ 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:16.768 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:16.769 Process raid pid: 71708 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71708 00:18:16.769 09:11:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71708' 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71708 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71708 ']' 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.769 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.769 [2024-11-06 09:11:15.663424] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:16.769 [2024-11-06 09:11:15.663708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:17.028 [2024-11-06 09:11:15.849818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.028 [2024-11-06 09:11:15.970079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:17.287 [2024-11-06 09:11:16.190891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:17.287 [2024-11-06 09:11:16.190925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:17.545 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:17.545 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:17.546 [2024-11-06 09:11:16.508664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:17.546 [2024-11-06 09:11:16.508864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:17.546 [2024-11-06 09:11:16.508887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:17.546 [2024-11-06 09:11:16.508903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:17.546 [2024-11-06 09:11:16.508911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:17.546 [2024-11-06 09:11:16.508923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:17.546 [2024-11-06 09:11:16.508931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:17.546 [2024-11-06 09:11:16.508943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:17.546 "name": "Existed_Raid",
00:18:17.546 "uuid": "06bd9f3b-a5b9-44b5-8a43-2cd8a1fe3eca",
00:18:17.546 "strip_size_kb": 64,
00:18:17.546 "state": "configuring",
00:18:17.546 "raid_level": "concat",
00:18:17.546 "superblock": true,
00:18:17.546 "num_base_bdevs": 4,
00:18:17.546 "num_base_bdevs_discovered": 0,
00:18:17.546 "num_base_bdevs_operational": 4,
00:18:17.546 "base_bdevs_list": [
00:18:17.546 {
00:18:17.546 "name": "BaseBdev1",
00:18:17.546 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:17.546 "is_configured": false,
00:18:17.546 "data_offset": 0,
00:18:17.546 "data_size": 0
00:18:17.546 },
00:18:17.546 {
00:18:17.546 "name": "BaseBdev2",
00:18:17.546 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:17.546 "is_configured": false,
00:18:17.546 "data_offset": 0,
00:18:17.546 "data_size": 0
00:18:17.546 },
00:18:17.546 {
00:18:17.546 "name": "BaseBdev3",
00:18:17.546 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:17.546 "is_configured": false,
00:18:17.546 "data_offset": 0,
00:18:17.546 "data_size": 0
00:18:17.546 },
00:18:17.546 {
00:18:17.546 "name": "BaseBdev4",
00:18:17.546 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:17.546 "is_configured": false,
00:18:17.546 "data_offset": 0,
00:18:17.546 "data_size": 0
00:18:17.546 }
00:18:17.546 ]
00:18:17.546 }'
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:17.546 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 [2024-11-06 09:11:16.940211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:18.112 [2024-11-06 09:11:16.940255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 [2024-11-06 09:11:16.952200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:18.112 [2024-11-06 09:11:16.952248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:18.112 [2024-11-06 09:11:16.952259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:18.112 [2024-11-06 09:11:16.952285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:18.112 [2024-11-06 09:11:16.952294] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:18.112 [2024-11-06 09:11:16.952306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:18.112 [2024-11-06 09:11:16.952314] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:18.112 [2024-11-06 09:11:16.952326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.112 09:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 [2024-11-06 09:11:17.002166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:18.112 BaseBdev1
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.112 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.112 [
00:18:18.112 {
00:18:18.112 "name": "BaseBdev1",
00:18:18.112 "aliases": [
00:18:18.112 "cc06b60d-5569-4f03-95e3-38f05916a0f1"
00:18:18.112 ],
00:18:18.112 "product_name": "Malloc disk",
00:18:18.112 "block_size": 512,
00:18:18.112 "num_blocks": 65536,
00:18:18.112 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1",
00:18:18.112 "assigned_rate_limits": {
00:18:18.112 "rw_ios_per_sec": 0,
00:18:18.112 "rw_mbytes_per_sec": 0,
00:18:18.112 "r_mbytes_per_sec": 0,
00:18:18.112 "w_mbytes_per_sec": 0
00:18:18.112 },
00:18:18.112 "claimed": true,
00:18:18.112 "claim_type": "exclusive_write",
00:18:18.112 "zoned": false,
00:18:18.112 "supported_io_types": {
00:18:18.112 "read": true,
00:18:18.112 "write": true,
00:18:18.112 "unmap": true,
00:18:18.112 "flush": true,
00:18:18.112 "reset": true,
00:18:18.113 "nvme_admin": false,
00:18:18.113 "nvme_io": false,
00:18:18.113 "nvme_io_md": false,
00:18:18.113 "write_zeroes": true,
00:18:18.113 "zcopy": true,
00:18:18.113 "get_zone_info": false,
00:18:18.113 "zone_management": false,
00:18:18.113 "zone_append": false,
00:18:18.113 "compare": false,
00:18:18.113 "compare_and_write": false,
00:18:18.113 "abort": true,
00:18:18.113 "seek_hole": false,
00:18:18.113 "seek_data": false,
00:18:18.113 "copy": true,
00:18:18.113 "nvme_iov_md": false
00:18:18.113 },
00:18:18.113 "memory_domains": [
00:18:18.113 {
00:18:18.113 "dma_device_id": "system",
00:18:18.113 "dma_device_type": 1
00:18:18.113 },
00:18:18.113 {
00:18:18.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:18.113 "dma_device_type": 2
00:18:18.113 }
00:18:18.113 ],
00:18:18.113 "driver_specific": {}
00:18:18.113 }
00:18:18.113 ]
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.113 "name": "Existed_Raid",
00:18:18.113 "uuid": "d493c6cb-867c-4d8f-8749-29ddb4fd0368",
00:18:18.113 "strip_size_kb": 64,
00:18:18.113 "state": "configuring",
00:18:18.113 "raid_level": "concat",
00:18:18.113 "superblock": true,
00:18:18.113 "num_base_bdevs": 4,
00:18:18.113 "num_base_bdevs_discovered": 1,
00:18:18.113 "num_base_bdevs_operational": 4,
00:18:18.113 "base_bdevs_list": [
00:18:18.113 {
00:18:18.113 "name": "BaseBdev1",
00:18:18.113 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1",
00:18:18.113 "is_configured": true,
00:18:18.113 "data_offset": 2048,
00:18:18.113 "data_size": 63488
00:18:18.113 },
00:18:18.113 {
00:18:18.113 "name": "BaseBdev2",
00:18:18.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.113 "is_configured": false,
00:18:18.113 "data_offset": 0,
00:18:18.113 "data_size": 0
00:18:18.113 },
00:18:18.113 {
00:18:18.113 "name": "BaseBdev3",
00:18:18.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.113 "is_configured": false,
00:18:18.113 "data_offset": 0,
00:18:18.113 "data_size": 0
00:18:18.113 },
00:18:18.113 {
00:18:18.113 "name": "BaseBdev4",
00:18:18.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.113 "is_configured": false,
00:18:18.113 "data_offset": 0,
00:18:18.113 "data_size": 0
00:18:18.113 }
00:18:18.113 ]
00:18:18.113 }'
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.113 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.680 [2024-11-06 09:11:17.481666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:18.680 [2024-11-06 09:11:17.481726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.680 [2024-11-06 09:11:17.493734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:18.680 [2024-11-06 09:11:17.495960] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:18.680 [2024-11-06 09:11:17.496011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:18.680 [2024-11-06 09:11:17.496023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:18.680 [2024-11-06 09:11:17.496038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:18.680 [2024-11-06 09:11:17.496046] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:18.680 [2024-11-06 09:11:17.496057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.680 "name": "Existed_Raid",
00:18:18.680 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168",
00:18:18.680 "strip_size_kb": 64,
00:18:18.680 "state": "configuring",
00:18:18.680 "raid_level": "concat",
00:18:18.680 "superblock": true,
00:18:18.680 "num_base_bdevs": 4,
00:18:18.680 "num_base_bdevs_discovered": 1,
00:18:18.680 "num_base_bdevs_operational": 4,
00:18:18.680 "base_bdevs_list": [
00:18:18.680 {
00:18:18.680 "name": "BaseBdev1",
00:18:18.680 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1",
00:18:18.680 "is_configured": true,
00:18:18.680 "data_offset": 2048,
00:18:18.680 "data_size": 63488
00:18:18.680 },
00:18:18.680 {
00:18:18.680 "name": "BaseBdev2",
00:18:18.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.680 "is_configured": false,
00:18:18.680 "data_offset": 0,
00:18:18.680 "data_size": 0
00:18:18.680 },
00:18:18.680 {
00:18:18.680 "name": "BaseBdev3",
00:18:18.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.680 "is_configured": false,
00:18:18.680 "data_offset": 0,
00:18:18.680 "data_size": 0
00:18:18.680 },
00:18:18.680 {
00:18:18.680 "name": "BaseBdev4",
00:18:18.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.680 "is_configured": false,
00:18:18.680 "data_offset": 0,
00:18:18.680 "data_size": 0
00:18:18.680 }
00:18:18.680 ]
00:18:18.680 }'
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.680 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.939 [2024-11-06 09:11:17.963318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:18.939 BaseBdev2
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:18.939 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.940 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:18.940 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.940 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:18.940 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.940 09:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.199 [
00:18:19.199 {
00:18:19.199 "name": "BaseBdev2",
00:18:19.199 "aliases": [
00:18:19.199 "677f47e9-de34-4568-898a-3e6649641015"
00:18:19.199 ],
00:18:19.199 "product_name": "Malloc disk",
00:18:19.199 "block_size": 512,
00:18:19.199 "num_blocks": 65536,
00:18:19.199 "uuid": "677f47e9-de34-4568-898a-3e6649641015",
00:18:19.199 "assigned_rate_limits": {
00:18:19.199 "rw_ios_per_sec": 0,
00:18:19.199 "rw_mbytes_per_sec": 0,
00:18:19.199 "r_mbytes_per_sec": 0,
00:18:19.199 "w_mbytes_per_sec": 0
00:18:19.199 },
00:18:19.199 "claimed": true,
00:18:19.199 "claim_type": "exclusive_write",
00:18:19.199 "zoned": false,
00:18:19.199 "supported_io_types": {
00:18:19.199 "read": true,
00:18:19.199 "write": true,
00:18:19.199 "unmap": true,
00:18:19.199 "flush": true,
00:18:19.199 "reset": true,
00:18:19.199 "nvme_admin": false,
00:18:19.199 "nvme_io": false,
00:18:19.199 "nvme_io_md": false,
00:18:19.199 "write_zeroes": true,
00:18:19.199 "zcopy": true,
00:18:19.199 "get_zone_info": false,
00:18:19.199 "zone_management": false,
00:18:19.199 "zone_append": false,
00:18:19.199 "compare": false,
00:18:19.199 "compare_and_write": false,
00:18:19.199 "abort": true,
00:18:19.199 "seek_hole": false,
00:18:19.199 "seek_data": false,
00:18:19.199 "copy": true,
00:18:19.199 "nvme_iov_md": false
00:18:19.199 },
00:18:19.199 "memory_domains": [
00:18:19.199 {
00:18:19.199 "dma_device_id": "system",
00:18:19.199 "dma_device_type": 1
00:18:19.199 },
00:18:19.199 {
00:18:19.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:19.199 "dma_device_type": 2
00:18:19.199 }
00:18:19.199 ],
00:18:19.199 "driver_specific": {}
00:18:19.199 }
00:18:19.199 ]
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.199 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:19.199 "name": "Existed_Raid",
00:18:19.199 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168",
00:18:19.199 "strip_size_kb": 64,
00:18:19.200 "state": "configuring",
00:18:19.200 "raid_level": "concat",
00:18:19.200 "superblock": true,
00:18:19.200 "num_base_bdevs": 4,
00:18:19.200 "num_base_bdevs_discovered": 2,
00:18:19.200 "num_base_bdevs_operational": 4,
00:18:19.200 "base_bdevs_list": [
00:18:19.200 {
00:18:19.200 "name": "BaseBdev1",
00:18:19.200 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1",
00:18:19.200 "is_configured": true,
00:18:19.200 "data_offset": 2048,
00:18:19.200 "data_size": 63488
00:18:19.200 },
00:18:19.200 {
00:18:19.200 "name": "BaseBdev2",
00:18:19.200 "uuid": "677f47e9-de34-4568-898a-3e6649641015",
00:18:19.200 "is_configured": true,
00:18:19.200 "data_offset": 2048,
00:18:19.200 "data_size": 63488
00:18:19.200 },
00:18:19.200 {
00:18:19.200 "name": "BaseBdev3",
00:18:19.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:19.200 "is_configured": false,
00:18:19.200 "data_offset": 0,
00:18:19.200 "data_size": 0
00:18:19.200 },
00:18:19.200 {
00:18:19.200 "name": "BaseBdev4",
00:18:19.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:19.200 "is_configured": false,
00:18:19.200 "data_offset": 0,
00:18:19.200 "data_size": 0
00:18:19.200 }
00:18:19.200 ]
00:18:19.200 }'
00:18:19.200 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:19.200 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.459 [2024-11-06 09:11:18.452677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:19.459 BaseBdev3
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.459 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.459 [
00:18:19.459 {
00:18:19.459 "name": "BaseBdev3",
00:18:19.459 "aliases": [
00:18:19.459 "455f5caf-3aef-492b-b5a6-043efb7eef98"
00:18:19.459 ],
00:18:19.459 "product_name": "Malloc disk",
00:18:19.459 "block_size": 512,
00:18:19.459 "num_blocks": 65536,
00:18:19.459 "uuid": "455f5caf-3aef-492b-b5a6-043efb7eef98",
00:18:19.459 "assigned_rate_limits": {
00:18:19.459 "rw_ios_per_sec": 0,
00:18:19.459 "rw_mbytes_per_sec": 0,
00:18:19.459 "r_mbytes_per_sec": 0,
00:18:19.459 "w_mbytes_per_sec": 0
00:18:19.459 },
00:18:19.459 "claimed": true,
00:18:19.459 "claim_type": "exclusive_write",
00:18:19.459 "zoned": false,
00:18:19.459 "supported_io_types": {
00:18:19.459 "read": true,
00:18:19.459 "write": true,
00:18:19.459 "unmap": true,
00:18:19.459 "flush": true,
00:18:19.459 "reset": true,
00:18:19.459 "nvme_admin": false,
00:18:19.459 "nvme_io": false,
00:18:19.459 "nvme_io_md": false,
00:18:19.459 "write_zeroes": true,
00:18:19.459 "zcopy": true,
00:18:19.459 "get_zone_info": false,
00:18:19.459 "zone_management": false,
00:18:19.459 "zone_append": false,
00:18:19.459 "compare": false,
00:18:19.459 "compare_and_write": false,
00:18:19.459 "abort": true,
00:18:19.459 "seek_hole": false,
00:18:19.459 "seek_data": false,
00:18:19.459 "copy": true,
00:18:19.459 "nvme_iov_md": false
00:18:19.459 },
00:18:19.459 "memory_domains": [
00:18:19.459 {
00:18:19.459 "dma_device_id": "system",
00:18:19.459 "dma_device_type": 1
00:18:19.459 },
00:18:19.459 {
00:18:19.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:19.459 "dma_device_type": 2
00:18:19.460 }
00:18:19.718 ],
00:18:19.718 "driver_specific": {}
00:18:19.718 }
00:18:19.718 ]
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:19.718 "name": "Existed_Raid",
00:18:19.718 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168",
00:18:19.718 "strip_size_kb": 64,
00:18:19.718 "state": "configuring",
00:18:19.718 "raid_level": "concat",
00:18:19.718 "superblock": true,
00:18:19.718 "num_base_bdevs": 4,
00:18:19.718 "num_base_bdevs_discovered": 3,
00:18:19.718 "num_base_bdevs_operational": 4,
00:18:19.718 "base_bdevs_list": [
00:18:19.718 {
00:18:19.718 "name": "BaseBdev1",
00:18:19.718 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1",
00:18:19.718 "is_configured": true,
00:18:19.718 "data_offset": 2048,
00:18:19.718 "data_size": 63488
00:18:19.718 },
00:18:19.718 {
00:18:19.718 "name": "BaseBdev2",
00:18:19.718 "uuid": "677f47e9-de34-4568-898a-3e6649641015",
00:18:19.718 "is_configured": true,
00:18:19.718 "data_offset": 2048,
00:18:19.718 "data_size": 63488
00:18:19.718 },
00:18:19.718 {
00:18:19.718 "name": "BaseBdev3",
00:18:19.718 "uuid": "455f5caf-3aef-492b-b5a6-043efb7eef98",
00:18:19.718 "is_configured": true,
00:18:19.718 "data_offset": 2048,
00:18:19.718 "data_size": 63488
00:18:19.718 },
00:18:19.718 {
00:18:19.718 "name": "BaseBdev4",
00:18:19.718 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:19.718 "is_configured": false,
00:18:19.718 "data_offset": 0,
00:18:19.718 "data_size": 0
00:18:19.718 }
00:18:19.718 ]
00:18:19.718 }'
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:19.718 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.977 [2024-11-06 09:11:18.951837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:19.977 [2024-11-06 09:11:18.952097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:18:19.977 [2024-11-06 09:11:18.952114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:19.977 [2024-11-06 09:11:18.952427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:18:19.977 [2024-11-06 09:11:18.952583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:18:19.977 [2024-11-06 09:11:18.952599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:18:19.977 [2024-11-06 09:11:18.952731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:19.977 BaseBdev4
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:19.977 [
00:18:19.977 {
00:18:19.977 "name": "BaseBdev4",
00:18:19.977 "aliases": [
00:18:19.977 "1eab24e9-ccbb-4eac-9b81-02666b8a5b05"
00:18:19.977 ],
00:18:19.977 "product_name": "Malloc disk",
00:18:19.977 "block_size": 512,
00:18:19.977 "num_blocks": 65536, 00:18:19.977 "uuid": "1eab24e9-ccbb-4eac-9b81-02666b8a5b05", 00:18:19.977 "assigned_rate_limits": { 00:18:19.977 "rw_ios_per_sec": 0, 00:18:19.977 "rw_mbytes_per_sec": 0, 00:18:19.977 "r_mbytes_per_sec": 0, 00:18:19.977 "w_mbytes_per_sec": 0 00:18:19.977 }, 00:18:19.977 "claimed": true, 00:18:19.977 "claim_type": "exclusive_write", 00:18:19.977 "zoned": false, 00:18:19.977 "supported_io_types": { 00:18:19.977 "read": true, 00:18:19.977 "write": true, 00:18:19.977 "unmap": true, 00:18:19.977 "flush": true, 00:18:19.977 "reset": true, 00:18:19.977 "nvme_admin": false, 00:18:19.977 "nvme_io": false, 00:18:19.977 "nvme_io_md": false, 00:18:19.977 "write_zeroes": true, 00:18:19.977 "zcopy": true, 00:18:19.977 "get_zone_info": false, 00:18:19.977 "zone_management": false, 00:18:19.977 "zone_append": false, 00:18:19.977 "compare": false, 00:18:19.977 "compare_and_write": false, 00:18:19.977 "abort": true, 00:18:19.977 "seek_hole": false, 00:18:19.977 "seek_data": false, 00:18:19.977 "copy": true, 00:18:19.977 "nvme_iov_md": false 00:18:19.977 }, 00:18:19.977 "memory_domains": [ 00:18:19.977 { 00:18:19.977 "dma_device_id": "system", 00:18:19.977 "dma_device_type": 1 00:18:19.977 }, 00:18:19.977 { 00:18:19.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.977 "dma_device_type": 2 00:18:19.977 } 00:18:19.977 ], 00:18:19.977 "driver_specific": {} 00:18:19.977 } 00:18:19.977 ] 00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:19.977 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.978 09:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.978 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.978 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.978 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.978 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.237 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.237 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.237 "name": "Existed_Raid", 00:18:20.237 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168", 00:18:20.237 "strip_size_kb": 64, 00:18:20.237 "state": "online", 00:18:20.237 "raid_level": "concat", 00:18:20.237 "superblock": true, 00:18:20.237 "num_base_bdevs": 
4, 00:18:20.237 "num_base_bdevs_discovered": 4, 00:18:20.237 "num_base_bdevs_operational": 4, 00:18:20.237 "base_bdevs_list": [ 00:18:20.237 { 00:18:20.237 "name": "BaseBdev1", 00:18:20.237 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1", 00:18:20.237 "is_configured": true, 00:18:20.237 "data_offset": 2048, 00:18:20.237 "data_size": 63488 00:18:20.237 }, 00:18:20.237 { 00:18:20.237 "name": "BaseBdev2", 00:18:20.237 "uuid": "677f47e9-de34-4568-898a-3e6649641015", 00:18:20.237 "is_configured": true, 00:18:20.237 "data_offset": 2048, 00:18:20.237 "data_size": 63488 00:18:20.237 }, 00:18:20.237 { 00:18:20.237 "name": "BaseBdev3", 00:18:20.237 "uuid": "455f5caf-3aef-492b-b5a6-043efb7eef98", 00:18:20.237 "is_configured": true, 00:18:20.237 "data_offset": 2048, 00:18:20.237 "data_size": 63488 00:18:20.237 }, 00:18:20.237 { 00:18:20.237 "name": "BaseBdev4", 00:18:20.237 "uuid": "1eab24e9-ccbb-4eac-9b81-02666b8a5b05", 00:18:20.237 "is_configured": true, 00:18:20.237 "data_offset": 2048, 00:18:20.237 "data_size": 63488 00:18:20.237 } 00:18:20.237 ] 00:18:20.237 }' 00:18:20.237 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.237 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.496 
09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.496 [2024-11-06 09:11:19.387638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.496 "name": "Existed_Raid", 00:18:20.496 "aliases": [ 00:18:20.496 "924a928f-7c9d-4fd6-9c89-7db4400e5168" 00:18:20.496 ], 00:18:20.496 "product_name": "Raid Volume", 00:18:20.496 "block_size": 512, 00:18:20.496 "num_blocks": 253952, 00:18:20.496 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168", 00:18:20.496 "assigned_rate_limits": { 00:18:20.496 "rw_ios_per_sec": 0, 00:18:20.496 "rw_mbytes_per_sec": 0, 00:18:20.496 "r_mbytes_per_sec": 0, 00:18:20.496 "w_mbytes_per_sec": 0 00:18:20.496 }, 00:18:20.496 "claimed": false, 00:18:20.496 "zoned": false, 00:18:20.496 "supported_io_types": { 00:18:20.496 "read": true, 00:18:20.496 "write": true, 00:18:20.496 "unmap": true, 00:18:20.496 "flush": true, 00:18:20.496 "reset": true, 00:18:20.496 "nvme_admin": false, 00:18:20.496 "nvme_io": false, 00:18:20.496 "nvme_io_md": false, 00:18:20.496 "write_zeroes": true, 00:18:20.496 "zcopy": false, 00:18:20.496 "get_zone_info": false, 00:18:20.496 "zone_management": false, 00:18:20.496 "zone_append": false, 00:18:20.496 "compare": false, 00:18:20.496 "compare_and_write": false, 00:18:20.496 "abort": false, 00:18:20.496 "seek_hole": false, 00:18:20.496 "seek_data": false, 00:18:20.496 "copy": false, 00:18:20.496 
"nvme_iov_md": false 00:18:20.496 }, 00:18:20.496 "memory_domains": [ 00:18:20.496 { 00:18:20.496 "dma_device_id": "system", 00:18:20.496 "dma_device_type": 1 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.496 "dma_device_type": 2 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "system", 00:18:20.496 "dma_device_type": 1 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.496 "dma_device_type": 2 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "system", 00:18:20.496 "dma_device_type": 1 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.496 "dma_device_type": 2 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "system", 00:18:20.496 "dma_device_type": 1 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.496 "dma_device_type": 2 00:18:20.496 } 00:18:20.496 ], 00:18:20.496 "driver_specific": { 00:18:20.496 "raid": { 00:18:20.496 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168", 00:18:20.496 "strip_size_kb": 64, 00:18:20.496 "state": "online", 00:18:20.496 "raid_level": "concat", 00:18:20.496 "superblock": true, 00:18:20.496 "num_base_bdevs": 4, 00:18:20.496 "num_base_bdevs_discovered": 4, 00:18:20.496 "num_base_bdevs_operational": 4, 00:18:20.496 "base_bdevs_list": [ 00:18:20.496 { 00:18:20.496 "name": "BaseBdev1", 00:18:20.496 "uuid": "cc06b60d-5569-4f03-95e3-38f05916a0f1", 00:18:20.496 "is_configured": true, 00:18:20.496 "data_offset": 2048, 00:18:20.496 "data_size": 63488 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "name": "BaseBdev2", 00:18:20.496 "uuid": "677f47e9-de34-4568-898a-3e6649641015", 00:18:20.496 "is_configured": true, 00:18:20.496 "data_offset": 2048, 00:18:20.496 "data_size": 63488 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "name": "BaseBdev3", 00:18:20.496 "uuid": "455f5caf-3aef-492b-b5a6-043efb7eef98", 00:18:20.496 "is_configured": true, 
00:18:20.496 "data_offset": 2048, 00:18:20.496 "data_size": 63488 00:18:20.496 }, 00:18:20.496 { 00:18:20.496 "name": "BaseBdev4", 00:18:20.496 "uuid": "1eab24e9-ccbb-4eac-9b81-02666b8a5b05", 00:18:20.496 "is_configured": true, 00:18:20.496 "data_offset": 2048, 00:18:20.496 "data_size": 63488 00:18:20.496 } 00:18:20.496 ] 00:18:20.496 } 00:18:20.496 } 00:18:20.496 }' 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:20.496 BaseBdev2 00:18:20.496 BaseBdev3 00:18:20.496 BaseBdev4' 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.496 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.765 09:11:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.765 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.765 [2024-11-06 09:11:19.722880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.765 [2024-11-06 09:11:19.723052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.765 [2024-11-06 09:11:19.723131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:21.024 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.025 "name": "Existed_Raid", 00:18:21.025 "uuid": "924a928f-7c9d-4fd6-9c89-7db4400e5168", 00:18:21.025 "strip_size_kb": 64, 00:18:21.025 "state": "offline", 00:18:21.025 "raid_level": "concat", 00:18:21.025 "superblock": true, 00:18:21.025 "num_base_bdevs": 4, 00:18:21.025 "num_base_bdevs_discovered": 3, 00:18:21.025 "num_base_bdevs_operational": 3, 00:18:21.025 "base_bdevs_list": [ 00:18:21.025 { 00:18:21.025 "name": null, 00:18:21.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.025 "is_configured": false, 00:18:21.025 "data_offset": 0, 00:18:21.025 "data_size": 63488 00:18:21.025 }, 00:18:21.025 { 00:18:21.025 "name": "BaseBdev2", 00:18:21.025 "uuid": "677f47e9-de34-4568-898a-3e6649641015", 00:18:21.025 "is_configured": true, 00:18:21.025 "data_offset": 2048, 00:18:21.025 "data_size": 63488 00:18:21.025 }, 00:18:21.025 { 00:18:21.025 "name": "BaseBdev3", 00:18:21.025 "uuid": "455f5caf-3aef-492b-b5a6-043efb7eef98", 00:18:21.025 "is_configured": true, 00:18:21.025 "data_offset": 2048, 00:18:21.025 "data_size": 63488 00:18:21.025 }, 00:18:21.025 { 00:18:21.025 "name": "BaseBdev4", 00:18:21.025 "uuid": "1eab24e9-ccbb-4eac-9b81-02666b8a5b05", 00:18:21.025 "is_configured": true, 00:18:21.025 "data_offset": 2048, 00:18:21.025 "data_size": 63488 00:18:21.025 } 00:18:21.025 ] 00:18:21.025 }' 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.025 09:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.285 
09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.285 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.285 [2024-11-06 09:11:20.303362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.547 [2024-11-06 09:11:20.457038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.547 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.548 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:21.548 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.548 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:21.813 09:11:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 [2024-11-06 09:11:20.606826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:21.813 [2024-11-06 09:11:20.606882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 BaseBdev2 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.813 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.813 [ 00:18:21.813 { 00:18:21.813 "name": "BaseBdev2", 00:18:21.813 "aliases": [ 00:18:21.813 
"65a8fce7-ebd3-4ecb-afc9-1061938b9687" 00:18:21.813 ], 00:18:21.813 "product_name": "Malloc disk", 00:18:21.813 "block_size": 512, 00:18:21.813 "num_blocks": 65536, 00:18:21.813 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:21.813 "assigned_rate_limits": { 00:18:21.813 "rw_ios_per_sec": 0, 00:18:21.813 "rw_mbytes_per_sec": 0, 00:18:21.813 "r_mbytes_per_sec": 0, 00:18:21.813 "w_mbytes_per_sec": 0 00:18:21.813 }, 00:18:21.813 "claimed": false, 00:18:21.813 "zoned": false, 00:18:21.813 "supported_io_types": { 00:18:21.813 "read": true, 00:18:21.813 "write": true, 00:18:21.813 "unmap": true, 00:18:21.813 "flush": true, 00:18:21.813 "reset": true, 00:18:21.813 "nvme_admin": false, 00:18:21.813 "nvme_io": false, 00:18:21.813 "nvme_io_md": false, 00:18:21.813 "write_zeroes": true, 00:18:21.813 "zcopy": true, 00:18:21.813 "get_zone_info": false, 00:18:21.813 "zone_management": false, 00:18:21.813 "zone_append": false, 00:18:21.813 "compare": false, 00:18:21.813 "compare_and_write": false, 00:18:21.813 "abort": true, 00:18:21.813 "seek_hole": false, 00:18:21.813 "seek_data": false, 00:18:21.813 "copy": true, 00:18:21.813 "nvme_iov_md": false 00:18:21.813 }, 00:18:21.813 "memory_domains": [ 00:18:21.814 { 00:18:21.814 "dma_device_id": "system", 00:18:21.814 "dma_device_type": 1 00:18:21.814 }, 00:18:21.814 { 00:18:21.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.814 "dma_device_type": 2 00:18:21.814 } 00:18:21.814 ], 00:18:21.814 "driver_specific": {} 00:18:21.814 } 00:18:21.814 ] 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:21.814 09:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.814 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.104 BaseBdev3 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.104 [ 00:18:22.104 { 
00:18:22.104 "name": "BaseBdev3", 00:18:22.104 "aliases": [ 00:18:22.104 "15223e34-2afe-4d9c-9257-191c65db7074" 00:18:22.104 ], 00:18:22.104 "product_name": "Malloc disk", 00:18:22.104 "block_size": 512, 00:18:22.104 "num_blocks": 65536, 00:18:22.104 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:22.104 "assigned_rate_limits": { 00:18:22.104 "rw_ios_per_sec": 0, 00:18:22.104 "rw_mbytes_per_sec": 0, 00:18:22.104 "r_mbytes_per_sec": 0, 00:18:22.104 "w_mbytes_per_sec": 0 00:18:22.104 }, 00:18:22.104 "claimed": false, 00:18:22.104 "zoned": false, 00:18:22.104 "supported_io_types": { 00:18:22.104 "read": true, 00:18:22.104 "write": true, 00:18:22.104 "unmap": true, 00:18:22.104 "flush": true, 00:18:22.104 "reset": true, 00:18:22.104 "nvme_admin": false, 00:18:22.104 "nvme_io": false, 00:18:22.104 "nvme_io_md": false, 00:18:22.104 "write_zeroes": true, 00:18:22.104 "zcopy": true, 00:18:22.104 "get_zone_info": false, 00:18:22.104 "zone_management": false, 00:18:22.104 "zone_append": false, 00:18:22.104 "compare": false, 00:18:22.104 "compare_and_write": false, 00:18:22.104 "abort": true, 00:18:22.104 "seek_hole": false, 00:18:22.104 "seek_data": false, 00:18:22.104 "copy": true, 00:18:22.104 "nvme_iov_md": false 00:18:22.104 }, 00:18:22.104 "memory_domains": [ 00:18:22.104 { 00:18:22.104 "dma_device_id": "system", 00:18:22.104 "dma_device_type": 1 00:18:22.104 }, 00:18:22.104 { 00:18:22.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.104 "dma_device_type": 2 00:18:22.104 } 00:18:22.104 ], 00:18:22.104 "driver_specific": {} 00:18:22.104 } 00:18:22.104 ] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.104 BaseBdev4 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:22.104 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.105 09:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:22.105 [ 00:18:22.105 { 00:18:22.105 "name": "BaseBdev4", 00:18:22.105 "aliases": [ 00:18:22.105 "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb" 00:18:22.105 ], 00:18:22.105 "product_name": "Malloc disk", 00:18:22.105 "block_size": 512, 00:18:22.105 "num_blocks": 65536, 00:18:22.105 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:22.105 "assigned_rate_limits": { 00:18:22.105 "rw_ios_per_sec": 0, 00:18:22.105 "rw_mbytes_per_sec": 0, 00:18:22.105 "r_mbytes_per_sec": 0, 00:18:22.105 "w_mbytes_per_sec": 0 00:18:22.105 }, 00:18:22.105 "claimed": false, 00:18:22.105 "zoned": false, 00:18:22.105 "supported_io_types": { 00:18:22.105 "read": true, 00:18:22.105 "write": true, 00:18:22.105 "unmap": true, 00:18:22.105 "flush": true, 00:18:22.105 "reset": true, 00:18:22.105 "nvme_admin": false, 00:18:22.105 "nvme_io": false, 00:18:22.105 "nvme_io_md": false, 00:18:22.105 "write_zeroes": true, 00:18:22.105 "zcopy": true, 00:18:22.105 "get_zone_info": false, 00:18:22.105 "zone_management": false, 00:18:22.105 "zone_append": false, 00:18:22.105 "compare": false, 00:18:22.105 "compare_and_write": false, 00:18:22.105 "abort": true, 00:18:22.105 "seek_hole": false, 00:18:22.105 "seek_data": false, 00:18:22.105 "copy": true, 00:18:22.105 "nvme_iov_md": false 00:18:22.105 }, 00:18:22.105 "memory_domains": [ 00:18:22.105 { 00:18:22.105 "dma_device_id": "system", 00:18:22.105 "dma_device_type": 1 00:18:22.105 }, 00:18:22.105 { 00:18:22.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.105 "dma_device_type": 2 00:18:22.105 } 00:18:22.105 ], 00:18:22.105 "driver_specific": {} 00:18:22.105 } 00:18:22.105 ] 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:22.105 09:11:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.105 [2024-11-06 09:11:21.039694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.105 [2024-11-06 09:11:21.039744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.105 [2024-11-06 09:11:21.039768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.105 [2024-11-06 09:11:21.041881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.105 [2024-11-06 09:11:21.041932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.105 "name": "Existed_Raid", 00:18:22.105 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:22.105 "strip_size_kb": 64, 00:18:22.105 "state": "configuring", 00:18:22.105 "raid_level": "concat", 00:18:22.105 "superblock": true, 00:18:22.105 "num_base_bdevs": 4, 00:18:22.105 "num_base_bdevs_discovered": 3, 00:18:22.105 "num_base_bdevs_operational": 4, 00:18:22.105 "base_bdevs_list": [ 00:18:22.105 { 00:18:22.105 "name": "BaseBdev1", 00:18:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.105 "is_configured": false, 00:18:22.105 "data_offset": 0, 00:18:22.105 "data_size": 0 00:18:22.105 }, 00:18:22.105 { 00:18:22.105 "name": "BaseBdev2", 00:18:22.105 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:22.105 "is_configured": true, 00:18:22.105 "data_offset": 2048, 00:18:22.105 "data_size": 63488 
00:18:22.105 }, 00:18:22.105 { 00:18:22.105 "name": "BaseBdev3", 00:18:22.105 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:22.105 "is_configured": true, 00:18:22.105 "data_offset": 2048, 00:18:22.105 "data_size": 63488 00:18:22.105 }, 00:18:22.105 { 00:18:22.105 "name": "BaseBdev4", 00:18:22.105 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:22.105 "is_configured": true, 00:18:22.105 "data_offset": 2048, 00:18:22.105 "data_size": 63488 00:18:22.105 } 00:18:22.105 ] 00:18:22.105 }' 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.105 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.671 [2024-11-06 09:11:21.431181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.671 "name": "Existed_Raid", 00:18:22.671 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:22.671 "strip_size_kb": 64, 00:18:22.671 "state": "configuring", 00:18:22.671 "raid_level": "concat", 00:18:22.671 "superblock": true, 00:18:22.671 "num_base_bdevs": 4, 00:18:22.671 "num_base_bdevs_discovered": 2, 00:18:22.671 "num_base_bdevs_operational": 4, 00:18:22.671 "base_bdevs_list": [ 00:18:22.671 { 00:18:22.671 "name": "BaseBdev1", 00:18:22.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.671 "is_configured": false, 00:18:22.671 "data_offset": 0, 00:18:22.671 "data_size": 0 00:18:22.671 }, 00:18:22.671 { 00:18:22.671 "name": null, 00:18:22.671 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:22.671 "is_configured": false, 00:18:22.671 "data_offset": 0, 00:18:22.671 "data_size": 63488 
00:18:22.671 }, 00:18:22.671 { 00:18:22.671 "name": "BaseBdev3", 00:18:22.671 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:22.671 "is_configured": true, 00:18:22.671 "data_offset": 2048, 00:18:22.671 "data_size": 63488 00:18:22.671 }, 00:18:22.671 { 00:18:22.671 "name": "BaseBdev4", 00:18:22.671 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:22.671 "is_configured": true, 00:18:22.671 "data_offset": 2048, 00:18:22.671 "data_size": 63488 00:18:22.671 } 00:18:22.671 ] 00:18:22.671 }' 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.671 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.930 [2024-11-06 09:11:21.936038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.930 BaseBdev1 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.930 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.930 [ 00:18:22.930 { 00:18:22.930 "name": "BaseBdev1", 00:18:22.930 "aliases": [ 00:18:22.930 "7dca051b-614f-4725-8300-a0d55289d0a8" 00:18:22.930 ], 00:18:22.930 "product_name": "Malloc disk", 00:18:22.930 "block_size": 512, 00:18:22.930 "num_blocks": 65536, 00:18:22.930 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:23.188 "assigned_rate_limits": { 00:18:23.188 "rw_ios_per_sec": 0, 00:18:23.188 "rw_mbytes_per_sec": 0, 
00:18:23.188 "r_mbytes_per_sec": 0, 00:18:23.188 "w_mbytes_per_sec": 0 00:18:23.188 }, 00:18:23.188 "claimed": true, 00:18:23.188 "claim_type": "exclusive_write", 00:18:23.188 "zoned": false, 00:18:23.188 "supported_io_types": { 00:18:23.188 "read": true, 00:18:23.188 "write": true, 00:18:23.188 "unmap": true, 00:18:23.188 "flush": true, 00:18:23.188 "reset": true, 00:18:23.188 "nvme_admin": false, 00:18:23.188 "nvme_io": false, 00:18:23.188 "nvme_io_md": false, 00:18:23.188 "write_zeroes": true, 00:18:23.188 "zcopy": true, 00:18:23.188 "get_zone_info": false, 00:18:23.188 "zone_management": false, 00:18:23.188 "zone_append": false, 00:18:23.188 "compare": false, 00:18:23.188 "compare_and_write": false, 00:18:23.188 "abort": true, 00:18:23.188 "seek_hole": false, 00:18:23.188 "seek_data": false, 00:18:23.188 "copy": true, 00:18:23.188 "nvme_iov_md": false 00:18:23.188 }, 00:18:23.188 "memory_domains": [ 00:18:23.188 { 00:18:23.188 "dma_device_id": "system", 00:18:23.188 "dma_device_type": 1 00:18:23.188 }, 00:18:23.188 { 00:18:23.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.188 "dma_device_type": 2 00:18:23.188 } 00:18:23.188 ], 00:18:23.188 "driver_specific": {} 00:18:23.188 } 00:18:23.188 ] 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:23.188 09:11:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.188 09:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.188 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.188 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.188 "name": "Existed_Raid", 00:18:23.188 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:23.188 "strip_size_kb": 64, 00:18:23.188 "state": "configuring", 00:18:23.188 "raid_level": "concat", 00:18:23.188 "superblock": true, 00:18:23.188 "num_base_bdevs": 4, 00:18:23.189 "num_base_bdevs_discovered": 3, 00:18:23.189 "num_base_bdevs_operational": 4, 00:18:23.189 "base_bdevs_list": [ 00:18:23.189 { 00:18:23.189 "name": "BaseBdev1", 00:18:23.189 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:23.189 "is_configured": true, 00:18:23.189 "data_offset": 2048, 00:18:23.189 "data_size": 63488 00:18:23.189 }, 00:18:23.189 { 
00:18:23.189 "name": null, 00:18:23.189 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:23.189 "is_configured": false, 00:18:23.189 "data_offset": 0, 00:18:23.189 "data_size": 63488 00:18:23.189 }, 00:18:23.189 { 00:18:23.189 "name": "BaseBdev3", 00:18:23.189 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:23.189 "is_configured": true, 00:18:23.189 "data_offset": 2048, 00:18:23.189 "data_size": 63488 00:18:23.189 }, 00:18:23.189 { 00:18:23.189 "name": "BaseBdev4", 00:18:23.189 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:23.189 "is_configured": true, 00:18:23.189 "data_offset": 2048, 00:18:23.189 "data_size": 63488 00:18:23.189 } 00:18:23.189 ] 00:18:23.189 }' 00:18:23.189 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.189 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.447 [2024-11-06 09:11:22.471396] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.447 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.705 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.705 09:11:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.705 "name": "Existed_Raid", 00:18:23.705 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:23.705 "strip_size_kb": 64, 00:18:23.705 "state": "configuring", 00:18:23.705 "raid_level": "concat", 00:18:23.705 "superblock": true, 00:18:23.705 "num_base_bdevs": 4, 00:18:23.705 "num_base_bdevs_discovered": 2, 00:18:23.705 "num_base_bdevs_operational": 4, 00:18:23.705 "base_bdevs_list": [ 00:18:23.705 { 00:18:23.705 "name": "BaseBdev1", 00:18:23.705 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:23.705 "is_configured": true, 00:18:23.705 "data_offset": 2048, 00:18:23.705 "data_size": 63488 00:18:23.705 }, 00:18:23.705 { 00:18:23.705 "name": null, 00:18:23.705 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:23.705 "is_configured": false, 00:18:23.705 "data_offset": 0, 00:18:23.705 "data_size": 63488 00:18:23.705 }, 00:18:23.705 { 00:18:23.705 "name": null, 00:18:23.705 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:23.705 "is_configured": false, 00:18:23.705 "data_offset": 0, 00:18:23.705 "data_size": 63488 00:18:23.705 }, 00:18:23.705 { 00:18:23.705 "name": "BaseBdev4", 00:18:23.705 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:23.705 "is_configured": true, 00:18:23.705 "data_offset": 2048, 00:18:23.705 "data_size": 63488 00:18:23.705 } 00:18:23.705 ] 00:18:23.705 }' 00:18:23.705 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.705 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.964 
09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 [2024-11-06 09:11:22.946723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.964 09:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.964 "name": "Existed_Raid", 00:18:23.964 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:23.964 "strip_size_kb": 64, 00:18:23.964 "state": "configuring", 00:18:23.964 "raid_level": "concat", 00:18:23.964 "superblock": true, 00:18:23.964 "num_base_bdevs": 4, 00:18:23.964 "num_base_bdevs_discovered": 3, 00:18:23.964 "num_base_bdevs_operational": 4, 00:18:23.964 "base_bdevs_list": [ 00:18:23.964 { 00:18:23.964 "name": "BaseBdev1", 00:18:23.964 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:23.964 "is_configured": true, 00:18:23.964 "data_offset": 2048, 00:18:23.964 "data_size": 63488 00:18:23.964 }, 00:18:23.964 { 00:18:23.964 "name": null, 00:18:23.964 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:23.964 "is_configured": false, 00:18:23.964 "data_offset": 0, 00:18:23.964 "data_size": 63488 00:18:23.964 }, 00:18:23.964 { 00:18:23.964 "name": "BaseBdev3", 00:18:23.964 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:23.964 "is_configured": true, 00:18:23.964 "data_offset": 2048, 00:18:23.964 "data_size": 63488 00:18:23.964 }, 00:18:23.964 { 00:18:23.964 "name": "BaseBdev4", 00:18:23.964 "uuid": 
"9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:23.964 "is_configured": true, 00:18:23.964 "data_offset": 2048, 00:18:23.964 "data_size": 63488 00:18:23.964 } 00:18:23.964 ] 00:18:23.964 }' 00:18:23.964 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.964 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.545 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 [2024-11-06 09:11:23.362151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.546 "name": "Existed_Raid", 00:18:24.546 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:24.546 "strip_size_kb": 64, 00:18:24.546 "state": "configuring", 00:18:24.546 "raid_level": "concat", 00:18:24.546 "superblock": true, 00:18:24.546 "num_base_bdevs": 4, 00:18:24.546 "num_base_bdevs_discovered": 2, 00:18:24.546 "num_base_bdevs_operational": 4, 00:18:24.546 "base_bdevs_list": [ 00:18:24.546 { 00:18:24.546 "name": null, 00:18:24.546 
"uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:24.546 "is_configured": false, 00:18:24.546 "data_offset": 0, 00:18:24.546 "data_size": 63488 00:18:24.546 }, 00:18:24.546 { 00:18:24.546 "name": null, 00:18:24.546 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:24.546 "is_configured": false, 00:18:24.546 "data_offset": 0, 00:18:24.546 "data_size": 63488 00:18:24.546 }, 00:18:24.546 { 00:18:24.546 "name": "BaseBdev3", 00:18:24.546 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:24.546 "is_configured": true, 00:18:24.546 "data_offset": 2048, 00:18:24.546 "data_size": 63488 00:18:24.546 }, 00:18:24.546 { 00:18:24.546 "name": "BaseBdev4", 00:18:24.546 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:24.546 "is_configured": true, 00:18:24.546 "data_offset": 2048, 00:18:24.546 "data_size": 63488 00:18:24.546 } 00:18:24.546 ] 00:18:24.546 }' 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.546 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.113 [2024-11-06 09:11:23.931687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.113 09:11:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.113 "name": "Existed_Raid", 00:18:25.113 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:25.113 "strip_size_kb": 64, 00:18:25.113 "state": "configuring", 00:18:25.113 "raid_level": "concat", 00:18:25.113 "superblock": true, 00:18:25.113 "num_base_bdevs": 4, 00:18:25.113 "num_base_bdevs_discovered": 3, 00:18:25.113 "num_base_bdevs_operational": 4, 00:18:25.113 "base_bdevs_list": [ 00:18:25.113 { 00:18:25.113 "name": null, 00:18:25.113 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:25.113 "is_configured": false, 00:18:25.113 "data_offset": 0, 00:18:25.113 "data_size": 63488 00:18:25.113 }, 00:18:25.113 { 00:18:25.113 "name": "BaseBdev2", 00:18:25.113 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:25.113 "is_configured": true, 00:18:25.113 "data_offset": 2048, 00:18:25.113 "data_size": 63488 00:18:25.113 }, 00:18:25.113 { 00:18:25.113 "name": "BaseBdev3", 00:18:25.113 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:25.113 "is_configured": true, 00:18:25.113 "data_offset": 2048, 00:18:25.113 "data_size": 63488 00:18:25.113 }, 00:18:25.113 { 00:18:25.113 "name": "BaseBdev4", 00:18:25.113 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:25.113 "is_configured": true, 00:18:25.113 "data_offset": 2048, 00:18:25.113 "data_size": 63488 00:18:25.113 } 00:18:25.113 ] 00:18:25.113 }' 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.113 09:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:25.371 09:11:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:25.371 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7dca051b-614f-4725-8300-a0d55289d0a8 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 [2024-11-06 09:11:24.461118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:25.633 [2024-11-06 09:11:24.461539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:25.633 [2024-11-06 09:11:24.461561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:25.633 [2024-11-06 09:11:24.461856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:18:25.633 NewBaseBdev 00:18:25.633 [2024-11-06 09:11:24.462001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:25.633 [2024-11-06 09:11:24.462016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:25.633 [2024-11-06 09:11:24.462162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:25.633 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.634 09:11:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.634 [ 00:18:25.634 { 00:18:25.634 "name": "NewBaseBdev", 00:18:25.634 "aliases": [ 00:18:25.634 "7dca051b-614f-4725-8300-a0d55289d0a8" 00:18:25.634 ], 00:18:25.634 "product_name": "Malloc disk", 00:18:25.634 "block_size": 512, 00:18:25.634 "num_blocks": 65536, 00:18:25.634 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:25.634 "assigned_rate_limits": { 00:18:25.634 "rw_ios_per_sec": 0, 00:18:25.634 "rw_mbytes_per_sec": 0, 00:18:25.634 "r_mbytes_per_sec": 0, 00:18:25.634 "w_mbytes_per_sec": 0 00:18:25.634 }, 00:18:25.634 "claimed": true, 00:18:25.634 "claim_type": "exclusive_write", 00:18:25.634 "zoned": false, 00:18:25.634 "supported_io_types": { 00:18:25.634 "read": true, 00:18:25.634 "write": true, 00:18:25.634 "unmap": true, 00:18:25.634 "flush": true, 00:18:25.634 "reset": true, 00:18:25.634 "nvme_admin": false, 00:18:25.634 "nvme_io": false, 00:18:25.634 "nvme_io_md": false, 00:18:25.634 "write_zeroes": true, 00:18:25.634 "zcopy": true, 00:18:25.634 "get_zone_info": false, 00:18:25.634 "zone_management": false, 00:18:25.634 "zone_append": false, 00:18:25.634 "compare": false, 00:18:25.634 "compare_and_write": false, 00:18:25.634 "abort": true, 00:18:25.634 "seek_hole": false, 00:18:25.634 "seek_data": false, 00:18:25.634 "copy": true, 00:18:25.634 "nvme_iov_md": false 00:18:25.634 }, 00:18:25.634 "memory_domains": [ 00:18:25.634 { 00:18:25.634 "dma_device_id": "system", 00:18:25.634 "dma_device_type": 1 00:18:25.634 }, 00:18:25.634 { 00:18:25.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.634 "dma_device_type": 2 00:18:25.634 } 00:18:25.634 ], 00:18:25.634 "driver_specific": {} 00:18:25.634 } 00:18:25.634 ] 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:25.634 09:11:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.634 "name": "Existed_Raid", 00:18:25.634 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:25.634 "strip_size_kb": 64, 00:18:25.634 
"state": "online", 00:18:25.634 "raid_level": "concat", 00:18:25.634 "superblock": true, 00:18:25.634 "num_base_bdevs": 4, 00:18:25.634 "num_base_bdevs_discovered": 4, 00:18:25.634 "num_base_bdevs_operational": 4, 00:18:25.634 "base_bdevs_list": [ 00:18:25.634 { 00:18:25.634 "name": "NewBaseBdev", 00:18:25.634 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:25.634 "is_configured": true, 00:18:25.634 "data_offset": 2048, 00:18:25.634 "data_size": 63488 00:18:25.634 }, 00:18:25.634 { 00:18:25.634 "name": "BaseBdev2", 00:18:25.634 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:25.634 "is_configured": true, 00:18:25.634 "data_offset": 2048, 00:18:25.634 "data_size": 63488 00:18:25.634 }, 00:18:25.634 { 00:18:25.634 "name": "BaseBdev3", 00:18:25.634 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:25.634 "is_configured": true, 00:18:25.634 "data_offset": 2048, 00:18:25.634 "data_size": 63488 00:18:25.634 }, 00:18:25.634 { 00:18:25.634 "name": "BaseBdev4", 00:18:25.634 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:25.634 "is_configured": true, 00:18:25.634 "data_offset": 2048, 00:18:25.634 "data_size": 63488 00:18:25.634 } 00:18:25.634 ] 00:18:25.634 }' 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.634 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:25.920 
09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.920 [2024-11-06 09:11:24.892916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.920 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.920 "name": "Existed_Raid", 00:18:25.920 "aliases": [ 00:18:25.920 "1b252075-2168-4e6e-be9a-b221fec0c913" 00:18:25.920 ], 00:18:25.920 "product_name": "Raid Volume", 00:18:25.920 "block_size": 512, 00:18:25.920 "num_blocks": 253952, 00:18:25.920 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:25.920 "assigned_rate_limits": { 00:18:25.920 "rw_ios_per_sec": 0, 00:18:25.920 "rw_mbytes_per_sec": 0, 00:18:25.920 "r_mbytes_per_sec": 0, 00:18:25.920 "w_mbytes_per_sec": 0 00:18:25.920 }, 00:18:25.920 "claimed": false, 00:18:25.920 "zoned": false, 00:18:25.920 "supported_io_types": { 00:18:25.920 "read": true, 00:18:25.920 "write": true, 00:18:25.920 "unmap": true, 00:18:25.920 "flush": true, 00:18:25.920 "reset": true, 00:18:25.920 "nvme_admin": false, 00:18:25.920 "nvme_io": false, 00:18:25.920 "nvme_io_md": false, 00:18:25.920 "write_zeroes": true, 00:18:25.920 "zcopy": false, 00:18:25.920 "get_zone_info": false, 00:18:25.920 "zone_management": false, 00:18:25.920 "zone_append": false, 00:18:25.920 "compare": false, 00:18:25.920 "compare_and_write": false, 00:18:25.920 "abort": 
false, 00:18:25.920 "seek_hole": false, 00:18:25.920 "seek_data": false, 00:18:25.920 "copy": false, 00:18:25.920 "nvme_iov_md": false 00:18:25.921 }, 00:18:25.921 "memory_domains": [ 00:18:25.921 { 00:18:25.921 "dma_device_id": "system", 00:18:25.921 "dma_device_type": 1 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.921 "dma_device_type": 2 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "system", 00:18:25.921 "dma_device_type": 1 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.921 "dma_device_type": 2 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "system", 00:18:25.921 "dma_device_type": 1 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.921 "dma_device_type": 2 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "system", 00:18:25.921 "dma_device_type": 1 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.921 "dma_device_type": 2 00:18:25.921 } 00:18:25.921 ], 00:18:25.921 "driver_specific": { 00:18:25.921 "raid": { 00:18:25.921 "uuid": "1b252075-2168-4e6e-be9a-b221fec0c913", 00:18:25.921 "strip_size_kb": 64, 00:18:25.921 "state": "online", 00:18:25.921 "raid_level": "concat", 00:18:25.921 "superblock": true, 00:18:25.921 "num_base_bdevs": 4, 00:18:25.921 "num_base_bdevs_discovered": 4, 00:18:25.921 "num_base_bdevs_operational": 4, 00:18:25.921 "base_bdevs_list": [ 00:18:25.921 { 00:18:25.921 "name": "NewBaseBdev", 00:18:25.921 "uuid": "7dca051b-614f-4725-8300-a0d55289d0a8", 00:18:25.921 "is_configured": true, 00:18:25.921 "data_offset": 2048, 00:18:25.921 "data_size": 63488 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "name": "BaseBdev2", 00:18:25.921 "uuid": "65a8fce7-ebd3-4ecb-afc9-1061938b9687", 00:18:25.921 "is_configured": true, 00:18:25.921 "data_offset": 2048, 00:18:25.921 "data_size": 63488 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 
"name": "BaseBdev3", 00:18:25.921 "uuid": "15223e34-2afe-4d9c-9257-191c65db7074", 00:18:25.921 "is_configured": true, 00:18:25.921 "data_offset": 2048, 00:18:25.921 "data_size": 63488 00:18:25.921 }, 00:18:25.921 { 00:18:25.921 "name": "BaseBdev4", 00:18:25.921 "uuid": "9df5e1bc-46d6-4d2e-a7b2-99965c2a7eeb", 00:18:25.921 "is_configured": true, 00:18:25.921 "data_offset": 2048, 00:18:25.921 "data_size": 63488 00:18:25.921 } 00:18:25.921 ] 00:18:25.921 } 00:18:25.921 } 00:18:25.921 }' 00:18:25.921 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.179 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:26.179 BaseBdev2 00:18:26.179 BaseBdev3 00:18:26.179 BaseBdev4' 00:18:26.179 09:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.179 09:11:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.179 [2024-11-06 09:11:25.208155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.179 [2024-11-06 09:11:25.208190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.179 [2024-11-06 09:11:25.208270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.179 [2024-11-06 09:11:25.208364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.179 [2024-11-06 09:11:25.208377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71708 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71708 ']' 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71708 00:18:26.179 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71708 00:18:26.437 killing process with pid 71708 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71708' 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71708 00:18:26.437 [2024-11-06 09:11:25.258915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.437 09:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71708 00:18:26.705 [2024-11-06 09:11:25.654721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.088 09:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:28.088 00:18:28.088 real 0m11.226s 00:18:28.088 user 0m17.733s 00:18:28.088 sys 0m2.284s 00:18:28.088 ************************************ 00:18:28.088 END TEST raid_state_function_test_sb 00:18:28.088 
************************************ 00:18:28.088 09:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:28.088 09:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.088 09:11:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:28.088 09:11:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:28.088 09:11:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:28.088 09:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.088 ************************************ 00:18:28.088 START TEST raid_superblock_test 00:18:28.088 ************************************ 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:28.088 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:28.089 09:11:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72375 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72375 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72375 ']' 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.089 09:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.089 [2024-11-06 09:11:26.955962] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:28.089 [2024-11-06 09:11:26.956095] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72375 ] 00:18:28.346 [2024-11-06 09:11:27.137229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.346 [2024-11-06 09:11:27.254030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.635 [2024-11-06 09:11:27.452643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.635 [2024-11-06 09:11:27.452705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:28.893 
09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.893 malloc1 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.893 [2024-11-06 09:11:27.858084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.893 [2024-11-06 09:11:27.858283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.893 [2024-11-06 09:11:27.858351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.893 [2024-11-06 09:11:27.858438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.893 [2024-11-06 09:11:27.860877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.893 [2024-11-06 09:11:27.861024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.893 pt1 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.893 malloc2 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.893 [2024-11-06 09:11:27.915825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.893 [2024-11-06 09:11:27.915983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.893 [2024-11-06 09:11:27.916043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.893 [2024-11-06 09:11:27.916118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.893 [2024-11-06 09:11:27.918473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.893 [2024-11-06 09:11:27.918601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.893 
pt2 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.893 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 malloc3 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 [2024-11-06 09:11:27.984625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:29.152 [2024-11-06 09:11:27.984787] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.152 [2024-11-06 09:11:27.984849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:29.152 [2024-11-06 09:11:27.984971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.152 [2024-11-06 09:11:27.987432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.152 [2024-11-06 09:11:27.987563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:29.152 pt3 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.152 09:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 malloc4 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 [2024-11-06 09:11:28.042819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:29.152 [2024-11-06 09:11:28.042871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.152 [2024-11-06 09:11:28.042892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:29.152 [2024-11-06 09:11:28.042903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.152 [2024-11-06 09:11:28.045212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.152 [2024-11-06 09:11:28.045251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:29.152 pt4 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 [2024-11-06 09:11:28.054837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:29.152 [2024-11-06 
09:11:28.056867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.152 [2024-11-06 09:11:28.056929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:29.152 [2024-11-06 09:11:28.056991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:29.152 [2024-11-06 09:11:28.057192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:29.152 [2024-11-06 09:11:28.057206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:29.152 [2024-11-06 09:11:28.057505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:29.152 [2024-11-06 09:11:28.057659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:29.152 [2024-11-06 09:11:28.057746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:29.152 [2024-11-06 09:11:28.057923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.152 "name": "raid_bdev1", 00:18:29.152 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:29.152 "strip_size_kb": 64, 00:18:29.152 "state": "online", 00:18:29.152 "raid_level": "concat", 00:18:29.152 "superblock": true, 00:18:29.152 "num_base_bdevs": 4, 00:18:29.152 "num_base_bdevs_discovered": 4, 00:18:29.152 "num_base_bdevs_operational": 4, 00:18:29.152 "base_bdevs_list": [ 00:18:29.152 { 00:18:29.152 "name": "pt1", 00:18:29.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.152 "is_configured": true, 00:18:29.152 "data_offset": 2048, 00:18:29.152 "data_size": 63488 00:18:29.152 }, 00:18:29.152 { 00:18:29.152 "name": "pt2", 00:18:29.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.152 "is_configured": true, 00:18:29.152 "data_offset": 2048, 00:18:29.152 "data_size": 63488 00:18:29.152 }, 00:18:29.152 { 00:18:29.152 "name": "pt3", 00:18:29.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.152 "is_configured": true, 00:18:29.152 "data_offset": 2048, 00:18:29.152 
"data_size": 63488 00:18:29.152 }, 00:18:29.152 { 00:18:29.152 "name": "pt4", 00:18:29.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:29.152 "is_configured": true, 00:18:29.152 "data_offset": 2048, 00:18:29.152 "data_size": 63488 00:18:29.152 } 00:18:29.152 ] 00:18:29.152 }' 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.152 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.718 [2024-11-06 09:11:28.506509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.718 "name": "raid_bdev1", 00:18:29.718 "aliases": [ 00:18:29.718 "c97d7b6e-00af-4625-89d9-c1dd20433896" 
00:18:29.718 ], 00:18:29.718 "product_name": "Raid Volume", 00:18:29.718 "block_size": 512, 00:18:29.718 "num_blocks": 253952, 00:18:29.718 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:29.718 "assigned_rate_limits": { 00:18:29.718 "rw_ios_per_sec": 0, 00:18:29.718 "rw_mbytes_per_sec": 0, 00:18:29.718 "r_mbytes_per_sec": 0, 00:18:29.718 "w_mbytes_per_sec": 0 00:18:29.718 }, 00:18:29.718 "claimed": false, 00:18:29.718 "zoned": false, 00:18:29.718 "supported_io_types": { 00:18:29.718 "read": true, 00:18:29.718 "write": true, 00:18:29.718 "unmap": true, 00:18:29.718 "flush": true, 00:18:29.718 "reset": true, 00:18:29.718 "nvme_admin": false, 00:18:29.718 "nvme_io": false, 00:18:29.718 "nvme_io_md": false, 00:18:29.718 "write_zeroes": true, 00:18:29.718 "zcopy": false, 00:18:29.718 "get_zone_info": false, 00:18:29.718 "zone_management": false, 00:18:29.718 "zone_append": false, 00:18:29.718 "compare": false, 00:18:29.718 "compare_and_write": false, 00:18:29.718 "abort": false, 00:18:29.718 "seek_hole": false, 00:18:29.718 "seek_data": false, 00:18:29.718 "copy": false, 00:18:29.718 "nvme_iov_md": false 00:18:29.718 }, 00:18:29.718 "memory_domains": [ 00:18:29.718 { 00:18:29.718 "dma_device_id": "system", 00:18:29.718 "dma_device_type": 1 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.718 "dma_device_type": 2 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "system", 00:18:29.718 "dma_device_type": 1 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.718 "dma_device_type": 2 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "system", 00:18:29.718 "dma_device_type": 1 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.718 "dma_device_type": 2 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": "system", 00:18:29.718 "dma_device_type": 1 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:29.718 "dma_device_type": 2 00:18:29.718 } 00:18:29.718 ], 00:18:29.718 "driver_specific": { 00:18:29.718 "raid": { 00:18:29.718 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:29.718 "strip_size_kb": 64, 00:18:29.718 "state": "online", 00:18:29.718 "raid_level": "concat", 00:18:29.718 "superblock": true, 00:18:29.718 "num_base_bdevs": 4, 00:18:29.718 "num_base_bdevs_discovered": 4, 00:18:29.718 "num_base_bdevs_operational": 4, 00:18:29.718 "base_bdevs_list": [ 00:18:29.718 { 00:18:29.718 "name": "pt1", 00:18:29.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.718 "is_configured": true, 00:18:29.718 "data_offset": 2048, 00:18:29.718 "data_size": 63488 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "name": "pt2", 00:18:29.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.718 "is_configured": true, 00:18:29.718 "data_offset": 2048, 00:18:29.718 "data_size": 63488 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "name": "pt3", 00:18:29.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.718 "is_configured": true, 00:18:29.718 "data_offset": 2048, 00:18:29.718 "data_size": 63488 00:18:29.718 }, 00:18:29.718 { 00:18:29.718 "name": "pt4", 00:18:29.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:29.718 "is_configured": true, 00:18:29.718 "data_offset": 2048, 00:18:29.718 "data_size": 63488 00:18:29.718 } 00:18:29.718 ] 00:18:29.718 } 00:18:29.718 } 00:18:29.718 }' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:29.718 pt2 00:18:29.718 pt3 00:18:29.718 pt4' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.718 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.719 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.719 09:11:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:29.719 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.719 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.719 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.719 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 [2024-11-06 09:11:28.818238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c97d7b6e-00af-4625-89d9-c1dd20433896 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c97d7b6e-00af-4625-89d9-c1dd20433896 ']' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 [2024-11-06 09:11:28.861931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.977 [2024-11-06 09:11:28.861959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.977 [2024-11-06 09:11:28.862049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.977 [2024-11-06 09:11:28.862126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.977 [2024-11-06 09:11:28.862145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:29.977 09:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:29.977 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.977 09:11:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:30.236 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.236 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:30.236 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.236 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.236 [2024-11-06 09:11:29.021913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:30.236 [2024-11-06 09:11:29.024026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:30.236 [2024-11-06 09:11:29.024070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:30.236 [2024-11-06 09:11:29.024104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:30.236 [2024-11-06 09:11:29.024153] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:30.236 [2024-11-06 09:11:29.024208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:30.236 [2024-11-06 09:11:29.024230] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:30.236 [2024-11-06 09:11:29.024251] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:30.236 [2024-11-06 09:11:29.024268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.236 [2024-11-06 09:11:29.024296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:18:30.236 request: 00:18:30.236 { 00:18:30.236 "name": "raid_bdev1", 00:18:30.236 "raid_level": "concat", 00:18:30.236 "base_bdevs": [ 00:18:30.236 "malloc1", 00:18:30.237 "malloc2", 00:18:30.237 "malloc3", 00:18:30.237 "malloc4" 00:18:30.237 ], 00:18:30.237 "strip_size_kb": 64, 00:18:30.237 "superblock": false, 00:18:30.237 "method": "bdev_raid_create", 00:18:30.237 "req_id": 1 00:18:30.237 } 00:18:30.237 Got JSON-RPC error response 00:18:30.237 response: 00:18:30.237 { 00:18:30.237 "code": -17, 00:18:30.237 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:30.237 } 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.237 [2024-11-06 09:11:29.089783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.237 [2024-11-06 09:11:29.089850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.237 [2024-11-06 09:11:29.089871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:30.237 [2024-11-06 09:11:29.089885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.237 [2024-11-06 09:11:29.092321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.237 [2024-11-06 09:11:29.092364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.237 [2024-11-06 09:11:29.092441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:30.237 [2024-11-06 09:11:29.092503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.237 pt1 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.237 "name": "raid_bdev1", 00:18:30.237 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:30.237 "strip_size_kb": 64, 00:18:30.237 "state": "configuring", 00:18:30.237 "raid_level": "concat", 00:18:30.237 "superblock": true, 00:18:30.237 "num_base_bdevs": 4, 00:18:30.237 "num_base_bdevs_discovered": 1, 00:18:30.237 "num_base_bdevs_operational": 4, 00:18:30.237 "base_bdevs_list": [ 00:18:30.237 { 00:18:30.237 "name": "pt1", 00:18:30.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.237 "is_configured": true, 00:18:30.237 "data_offset": 2048, 00:18:30.237 "data_size": 63488 00:18:30.237 }, 00:18:30.237 { 00:18:30.237 "name": null, 00:18:30.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.237 "is_configured": false, 00:18:30.237 "data_offset": 2048, 00:18:30.237 "data_size": 63488 00:18:30.237 }, 00:18:30.237 { 00:18:30.237 "name": null, 00:18:30.237 
"uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.237 "is_configured": false, 00:18:30.237 "data_offset": 2048, 00:18:30.237 "data_size": 63488 00:18:30.237 }, 00:18:30.237 { 00:18:30.237 "name": null, 00:18:30.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.237 "is_configured": false, 00:18:30.237 "data_offset": 2048, 00:18:30.237 "data_size": 63488 00:18:30.237 } 00:18:30.237 ] 00:18:30.237 }' 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.237 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.495 [2024-11-06 09:11:29.509433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.495 [2024-11-06 09:11:29.509628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.495 [2024-11-06 09:11:29.509687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:30.495 [2024-11-06 09:11:29.509770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.495 [2024-11-06 09:11:29.510264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.495 [2024-11-06 09:11:29.510442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.495 [2024-11-06 09:11:29.510611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:30.495 [2024-11-06 09:11:29.510715] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.495 pt2 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.495 [2024-11-06 09:11:29.521425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.495 09:11:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.495 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.752 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.752 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.752 "name": "raid_bdev1", 00:18:30.752 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:30.752 "strip_size_kb": 64, 00:18:30.752 "state": "configuring", 00:18:30.752 "raid_level": "concat", 00:18:30.752 "superblock": true, 00:18:30.752 "num_base_bdevs": 4, 00:18:30.752 "num_base_bdevs_discovered": 1, 00:18:30.752 "num_base_bdevs_operational": 4, 00:18:30.752 "base_bdevs_list": [ 00:18:30.752 { 00:18:30.752 "name": "pt1", 00:18:30.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.752 "is_configured": true, 00:18:30.752 "data_offset": 2048, 00:18:30.752 "data_size": 63488 00:18:30.752 }, 00:18:30.752 { 00:18:30.752 "name": null, 00:18:30.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.752 "is_configured": false, 00:18:30.752 "data_offset": 0, 00:18:30.752 "data_size": 63488 00:18:30.752 }, 00:18:30.752 { 00:18:30.752 "name": null, 00:18:30.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.752 "is_configured": false, 00:18:30.752 "data_offset": 2048, 00:18:30.752 "data_size": 63488 00:18:30.752 }, 00:18:30.752 { 00:18:30.752 "name": null, 00:18:30.752 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:30.752 "is_configured": false, 00:18:30.752 "data_offset": 2048, 00:18:30.752 "data_size": 63488 00:18:30.752 } 00:18:30.752 ] 00:18:30.752 }' 00:18:30.752 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.752 09:11:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 [2024-11-06 09:11:29.877328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.011 [2024-11-06 09:11:29.877397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.011 [2024-11-06 09:11:29.877422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:31.011 [2024-11-06 09:11:29.877434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.011 [2024-11-06 09:11:29.877927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.011 [2024-11-06 09:11:29.877948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.011 [2024-11-06 09:11:29.878037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.011 [2024-11-06 09:11:29.878062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.011 pt2 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 [2024-11-06 09:11:29.889305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.011 [2024-11-06 09:11:29.889498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.011 [2024-11-06 09:11:29.889540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:31.011 [2024-11-06 09:11:29.889560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.011 [2024-11-06 09:11:29.890015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.011 [2024-11-06 09:11:29.890043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.011 [2024-11-06 09:11:29.890133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:31.011 [2024-11-06 09:11:29.890157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.011 pt3 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 [2024-11-06 09:11:29.901241] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:31.011 [2024-11-06 09:11:29.901410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.011 [2024-11-06 09:11:29.901439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:31.011 [2024-11-06 09:11:29.901451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.011 [2024-11-06 09:11:29.901851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.011 [2024-11-06 09:11:29.901870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:31.011 [2024-11-06 09:11:29.901935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:31.011 [2024-11-06 09:11:29.901954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:31.011 [2024-11-06 09:11:29.902100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.011 [2024-11-06 09:11:29.902109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:31.011 [2024-11-06 09:11:29.902362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.011 [2024-11-06 09:11:29.902499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.011 [2024-11-06 09:11:29.902512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:31.011 [2024-11-06 09:11:29.902631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.011 pt4 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.012 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.012 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.012 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.012 "name": "raid_bdev1", 00:18:31.012 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:31.012 "strip_size_kb": 64, 00:18:31.012 "state": "online", 00:18:31.012 "raid_level": "concat", 00:18:31.012 
"superblock": true, 00:18:31.012 "num_base_bdevs": 4, 00:18:31.012 "num_base_bdevs_discovered": 4, 00:18:31.012 "num_base_bdevs_operational": 4, 00:18:31.012 "base_bdevs_list": [ 00:18:31.012 { 00:18:31.012 "name": "pt1", 00:18:31.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.012 "is_configured": true, 00:18:31.012 "data_offset": 2048, 00:18:31.012 "data_size": 63488 00:18:31.012 }, 00:18:31.012 { 00:18:31.012 "name": "pt2", 00:18:31.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.012 "is_configured": true, 00:18:31.012 "data_offset": 2048, 00:18:31.012 "data_size": 63488 00:18:31.012 }, 00:18:31.012 { 00:18:31.012 "name": "pt3", 00:18:31.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.012 "is_configured": true, 00:18:31.012 "data_offset": 2048, 00:18:31.012 "data_size": 63488 00:18:31.012 }, 00:18:31.012 { 00:18:31.012 "name": "pt4", 00:18:31.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.012 "is_configured": true, 00:18:31.012 "data_offset": 2048, 00:18:31.012 "data_size": 63488 00:18:31.012 } 00:18:31.012 ] 00:18:31.012 }' 00:18:31.012 09:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.012 09:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:31.578 09:11:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.578 [2024-11-06 09:11:30.333010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.578 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.578 "name": "raid_bdev1", 00:18:31.578 "aliases": [ 00:18:31.578 "c97d7b6e-00af-4625-89d9-c1dd20433896" 00:18:31.578 ], 00:18:31.578 "product_name": "Raid Volume", 00:18:31.578 "block_size": 512, 00:18:31.578 "num_blocks": 253952, 00:18:31.578 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:31.578 "assigned_rate_limits": { 00:18:31.578 "rw_ios_per_sec": 0, 00:18:31.578 "rw_mbytes_per_sec": 0, 00:18:31.578 "r_mbytes_per_sec": 0, 00:18:31.578 "w_mbytes_per_sec": 0 00:18:31.578 }, 00:18:31.578 "claimed": false, 00:18:31.578 "zoned": false, 00:18:31.578 "supported_io_types": { 00:18:31.578 "read": true, 00:18:31.578 "write": true, 00:18:31.578 "unmap": true, 00:18:31.578 "flush": true, 00:18:31.578 "reset": true, 00:18:31.578 "nvme_admin": false, 00:18:31.578 "nvme_io": false, 00:18:31.578 "nvme_io_md": false, 00:18:31.578 "write_zeroes": true, 00:18:31.578 "zcopy": false, 00:18:31.578 "get_zone_info": false, 00:18:31.578 "zone_management": false, 00:18:31.578 "zone_append": false, 00:18:31.578 "compare": false, 00:18:31.578 "compare_and_write": false, 00:18:31.578 "abort": false, 00:18:31.578 "seek_hole": false, 00:18:31.578 "seek_data": false, 00:18:31.578 "copy": false, 00:18:31.578 "nvme_iov_md": false 00:18:31.578 }, 00:18:31.578 
"memory_domains": [ 00:18:31.578 { 00:18:31.578 "dma_device_id": "system", 00:18:31.578 "dma_device_type": 1 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.579 "dma_device_type": 2 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "system", 00:18:31.579 "dma_device_type": 1 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.579 "dma_device_type": 2 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "system", 00:18:31.579 "dma_device_type": 1 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.579 "dma_device_type": 2 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "system", 00:18:31.579 "dma_device_type": 1 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.579 "dma_device_type": 2 00:18:31.579 } 00:18:31.579 ], 00:18:31.579 "driver_specific": { 00:18:31.579 "raid": { 00:18:31.579 "uuid": "c97d7b6e-00af-4625-89d9-c1dd20433896", 00:18:31.579 "strip_size_kb": 64, 00:18:31.579 "state": "online", 00:18:31.579 "raid_level": "concat", 00:18:31.579 "superblock": true, 00:18:31.579 "num_base_bdevs": 4, 00:18:31.579 "num_base_bdevs_discovered": 4, 00:18:31.579 "num_base_bdevs_operational": 4, 00:18:31.579 "base_bdevs_list": [ 00:18:31.579 { 00:18:31.579 "name": "pt1", 00:18:31.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.579 "is_configured": true, 00:18:31.579 "data_offset": 2048, 00:18:31.579 "data_size": 63488 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "name": "pt2", 00:18:31.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.579 "is_configured": true, 00:18:31.579 "data_offset": 2048, 00:18:31.579 "data_size": 63488 00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "name": "pt3", 00:18:31.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.579 "is_configured": true, 00:18:31.579 "data_offset": 2048, 00:18:31.579 "data_size": 63488 
00:18:31.579 }, 00:18:31.579 { 00:18:31.579 "name": "pt4", 00:18:31.579 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:31.579 "is_configured": true, 00:18:31.579 "data_offset": 2048, 00:18:31.579 "data_size": 63488 00:18:31.579 } 00:18:31.579 ] 00:18:31.579 } 00:18:31.579 } 00:18:31.579 }' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:31.579 pt2 00:18:31.579 pt3 00:18:31.579 pt4' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.579 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:31.837 [2024-11-06 09:11:30.656567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c97d7b6e-00af-4625-89d9-c1dd20433896 '!=' c97d7b6e-00af-4625-89d9-c1dd20433896 ']' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72375 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72375 ']' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72375 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72375 00:18:31.837 killing process with pid 72375 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72375' 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72375 00:18:31.837 [2024-11-06 09:11:30.731652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.837 [2024-11-06 09:11:30.731751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.837 09:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72375 00:18:31.837 [2024-11-06 09:11:30.731829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.837 [2024-11-06 09:11:30.731841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:32.402 [2024-11-06 09:11:31.159469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.341 09:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:33.341 00:18:33.341 real 0m5.493s 00:18:33.341 user 0m7.806s 00:18:33.341 sys 0m1.096s 00:18:33.341 09:11:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:33.341 ************************************ 00:18:33.341 END TEST raid_superblock_test 00:18:33.341 ************************************ 00:18:33.341 09:11:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.600 09:11:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:18:33.600 09:11:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:33.600 09:11:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:33.600 09:11:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.600 ************************************ 00:18:33.600 START TEST raid_read_error_test 00:18:33.600 ************************************ 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gbz4i6xRyJ 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72634 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72634 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72634 ']' 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:33.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:33.600 09:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.600 [2024-11-06 09:11:32.549295] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:33.600 [2024-11-06 09:11:32.549656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:18:33.858 [2024-11-06 09:11:32.733923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.858 [2024-11-06 09:11:32.863985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.117 [2024-11-06 09:11:33.087412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.117 [2024-11-06 09:11:33.087666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.683 BaseBdev1_malloc 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.683 true 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.683 [2024-11-06 09:11:33.518833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:34.683 [2024-11-06 09:11:33.518920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.683 [2024-11-06 09:11:33.518956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:34.683 [2024-11-06 09:11:33.518981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.683 [2024-11-06 09:11:33.521771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.683 [2024-11-06 09:11:33.521838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.683 BaseBdev1 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.683 BaseBdev2_malloc 00:18:34.683 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 true 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 [2024-11-06 09:11:33.590551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.684 [2024-11-06 09:11:33.590760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.684 [2024-11-06 09:11:33.590791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:34.684 [2024-11-06 09:11:33.590806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.684 [2024-11-06 09:11:33.593444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.684 [2024-11-06 09:11:33.593499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.684 BaseBdev2 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 BaseBdev3_malloc 00:18:34.684 09:11:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 true 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 [2024-11-06 09:11:33.672947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.684 [2024-11-06 09:11:33.673158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.684 [2024-11-06 09:11:33.673192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.684 [2024-11-06 09:11:33.673208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.684 [2024-11-06 09:11:33.675872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.684 [2024-11-06 09:11:33.675917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.684 BaseBdev3 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.684 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.943 BaseBdev4_malloc 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.943 true 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.943 [2024-11-06 09:11:33.743769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:34.943 [2024-11-06 09:11:33.743834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.943 [2024-11-06 09:11:33.743858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:34.943 [2024-11-06 09:11:33.743873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.943 [2024-11-06 09:11:33.746503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.943 [2024-11-06 09:11:33.746685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:34.943 BaseBdev4 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.943 [2024-11-06 09:11:33.755835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.943 [2024-11-06 09:11:33.758135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.943 [2024-11-06 09:11:33.758370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.943 [2024-11-06 09:11:33.758456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.943 [2024-11-06 09:11:33.758699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:34.943 [2024-11-06 09:11:33.758717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:34.943 [2024-11-06 09:11:33.759032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:34.943 [2024-11-06 09:11:33.759202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:34.943 [2024-11-06 09:11:33.759216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:34.943 [2024-11-06 09:11:33.759407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:34.943 09:11:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.943 "name": "raid_bdev1", 00:18:34.943 "uuid": "eede3baa-4502-4379-83a3-b9a3dd948da3", 00:18:34.943 "strip_size_kb": 64, 00:18:34.943 "state": "online", 00:18:34.943 "raid_level": "concat", 00:18:34.943 "superblock": true, 00:18:34.943 "num_base_bdevs": 4, 00:18:34.943 "num_base_bdevs_discovered": 4, 00:18:34.943 "num_base_bdevs_operational": 4, 00:18:34.943 "base_bdevs_list": [ 
00:18:34.943 { 00:18:34.943 "name": "BaseBdev1", 00:18:34.943 "uuid": "f78d7285-be1e-5a07-9b80-ff00dac07767", 00:18:34.943 "is_configured": true, 00:18:34.943 "data_offset": 2048, 00:18:34.943 "data_size": 63488 00:18:34.943 }, 00:18:34.943 { 00:18:34.943 "name": "BaseBdev2", 00:18:34.943 "uuid": "a61e5f78-840f-5589-9f0e-c91494562af0", 00:18:34.943 "is_configured": true, 00:18:34.943 "data_offset": 2048, 00:18:34.943 "data_size": 63488 00:18:34.943 }, 00:18:34.943 { 00:18:34.943 "name": "BaseBdev3", 00:18:34.943 "uuid": "0c29df88-6b2d-59d8-a8c1-9536f985b872", 00:18:34.943 "is_configured": true, 00:18:34.943 "data_offset": 2048, 00:18:34.943 "data_size": 63488 00:18:34.943 }, 00:18:34.943 { 00:18:34.943 "name": "BaseBdev4", 00:18:34.943 "uuid": "acb57e58-44dc-5d87-96c4-eecae4549c53", 00:18:34.943 "is_configured": true, 00:18:34.943 "data_offset": 2048, 00:18:34.943 "data_size": 63488 00:18:34.943 } 00:18:34.943 ] 00:18:34.943 }' 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.943 09:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.202 09:11:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:35.202 09:11:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:35.459 [2024-11-06 09:11:34.284696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:36.425 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:36.425 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.425 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.426 09:11:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.426 09:11:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.426 "name": "raid_bdev1", 00:18:36.426 "uuid": "eede3baa-4502-4379-83a3-b9a3dd948da3", 00:18:36.426 "strip_size_kb": 64, 00:18:36.426 "state": "online", 00:18:36.426 "raid_level": "concat", 00:18:36.426 "superblock": true, 00:18:36.426 "num_base_bdevs": 4, 00:18:36.426 "num_base_bdevs_discovered": 4, 00:18:36.426 "num_base_bdevs_operational": 4, 00:18:36.426 "base_bdevs_list": [ 00:18:36.426 { 00:18:36.426 "name": "BaseBdev1", 00:18:36.426 "uuid": "f78d7285-be1e-5a07-9b80-ff00dac07767", 00:18:36.426 "is_configured": true, 00:18:36.426 "data_offset": 2048, 00:18:36.426 "data_size": 63488 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "name": "BaseBdev2", 00:18:36.426 "uuid": "a61e5f78-840f-5589-9f0e-c91494562af0", 00:18:36.426 "is_configured": true, 00:18:36.426 "data_offset": 2048, 00:18:36.426 "data_size": 63488 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "name": "BaseBdev3", 00:18:36.426 "uuid": "0c29df88-6b2d-59d8-a8c1-9536f985b872", 00:18:36.426 "is_configured": true, 00:18:36.426 "data_offset": 2048, 00:18:36.426 "data_size": 63488 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "name": "BaseBdev4", 00:18:36.426 "uuid": "acb57e58-44dc-5d87-96c4-eecae4549c53", 00:18:36.426 "is_configured": true, 00:18:36.426 "data_offset": 2048, 00:18:36.426 "data_size": 63488 00:18:36.426 } 00:18:36.426 ] 00:18:36.426 }' 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.426 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.683 [2024-11-06 09:11:35.596172] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.683 [2024-11-06 09:11:35.596224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.683 [2024-11-06 09:11:35.599366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.683 [2024-11-06 09:11:35.599565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.683 [2024-11-06 09:11:35.599717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.683 [2024-11-06 09:11:35.599848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:36.683 { 00:18:36.683 "results": [ 00:18:36.683 { 00:18:36.683 "job": "raid_bdev1", 00:18:36.683 "core_mask": "0x1", 00:18:36.683 "workload": "randrw", 00:18:36.683 "percentage": 50, 00:18:36.683 "status": "finished", 00:18:36.683 "queue_depth": 1, 00:18:36.683 "io_size": 131072, 00:18:36.683 "runtime": 1.311085, 00:18:36.683 "iops": 14581.053097243886, 00:18:36.683 "mibps": 1822.6316371554858, 00:18:36.683 "io_failed": 1, 00:18:36.683 "io_timeout": 0, 00:18:36.683 "avg_latency_us": 95.0205317136314, 00:18:36.683 "min_latency_us": 28.37590361445783, 00:18:36.683 "max_latency_us": 1546.2811244979919 00:18:36.683 } 00:18:36.683 ], 00:18:36.683 "core_count": 1 00:18:36.683 } 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72634 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72634 ']' 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72634 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72634 00:18:36.683 killing process with pid 72634 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72634' 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72634 00:18:36.683 [2024-11-06 09:11:35.637176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.683 09:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72634 00:18:36.942 [2024-11-06 09:11:35.968835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gbz4i6xRyJ 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:18:38.318 00:18:38.318 real 0m4.735s 00:18:38.318 user 0m5.538s 00:18:38.318 sys 0m0.646s 00:18:38.318 ************************************ 00:18:38.318 END TEST raid_read_error_test 
00:18:38.318 ************************************ 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:38.318 09:11:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 09:11:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:18:38.318 09:11:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:38.318 09:11:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:38.318 09:11:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 ************************************ 00:18:38.318 START TEST raid_write_error_test 00:18:38.318 ************************************ 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.psYZ2rVmIv 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72784 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72784 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 72784 ']' 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:38.318 09:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 [2024-11-06 09:11:37.352877] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:38.318 [2024-11-06 09:11:37.353184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72784 ] 00:18:38.581 [2024-11-06 09:11:37.524109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.840 [2024-11-06 09:11:37.645041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.840 [2024-11-06 09:11:37.857704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.840 [2024-11-06 09:11:37.857742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.407 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:39.407 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:39.407 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:39.407 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:39.407 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 BaseBdev1_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 true 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 [2024-11-06 09:11:38.298553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:39.408 [2024-11-06 09:11:38.298628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.408 [2024-11-06 09:11:38.298653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:39.408 [2024-11-06 09:11:38.298669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.408 [2024-11-06 09:11:38.301453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.408 [2024-11-06 09:11:38.301506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.408 BaseBdev1 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 BaseBdev2_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:39.408 09:11:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 true 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 [2024-11-06 09:11:38.368761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:39.408 [2024-11-06 09:11:38.368833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.408 [2024-11-06 09:11:38.368853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:39.408 [2024-11-06 09:11:38.368867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.408 [2024-11-06 09:11:38.371257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.408 [2024-11-06 09:11:38.371313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.408 BaseBdev2 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:39.408 BaseBdev3_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.408 true 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.408 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 [2024-11-06 09:11:38.451659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:39.668 [2024-11-06 09:11:38.451728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.668 [2024-11-06 09:11:38.451749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:39.668 [2024-11-06 09:11:38.451763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.668 [2024-11-06 09:11:38.454163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.668 [2024-11-06 09:11:38.454211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:39.668 BaseBdev3 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 BaseBdev4_malloc 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 true 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 [2024-11-06 09:11:38.522970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:39.668 [2024-11-06 09:11:38.523042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.668 [2024-11-06 09:11:38.523065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:39.668 [2024-11-06 09:11:38.523080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.668 [2024-11-06 09:11:38.525511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.668 [2024-11-06 09:11:38.525559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:39.668 BaseBdev4 
00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 [2024-11-06 09:11:38.535027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.668 [2024-11-06 09:11:38.537313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.668 [2024-11-06 09:11:38.537506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.668 [2024-11-06 09:11:38.537612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.668 [2024-11-06 09:11:38.537966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:39.668 [2024-11-06 09:11:38.538092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:39.668 [2024-11-06 09:11:38.538501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:39.668 [2024-11-06 09:11:38.538786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:39.668 [2024-11-06 09:11:38.538894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:39.668 [2024-11-06 09:11:38.539219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.668 "name": "raid_bdev1", 00:18:39.668 "uuid": "71cdbbcb-1378-4f6e-95c2-2beba0cb8d68", 00:18:39.668 "strip_size_kb": 64, 00:18:39.668 "state": "online", 00:18:39.668 "raid_level": "concat", 00:18:39.668 "superblock": true, 00:18:39.668 "num_base_bdevs": 4, 00:18:39.668 "num_base_bdevs_discovered": 4, 00:18:39.668 
"num_base_bdevs_operational": 4, 00:18:39.668 "base_bdevs_list": [ 00:18:39.668 { 00:18:39.668 "name": "BaseBdev1", 00:18:39.668 "uuid": "a5993705-e55c-5b5e-957f-39562e48093e", 00:18:39.668 "is_configured": true, 00:18:39.668 "data_offset": 2048, 00:18:39.668 "data_size": 63488 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "name": "BaseBdev2", 00:18:39.668 "uuid": "a58d9277-d176-5766-94b4-a64aa8a25c0b", 00:18:39.668 "is_configured": true, 00:18:39.668 "data_offset": 2048, 00:18:39.668 "data_size": 63488 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "name": "BaseBdev3", 00:18:39.668 "uuid": "3f094d9b-dd85-5b95-a923-3dd3bc9cc48d", 00:18:39.668 "is_configured": true, 00:18:39.668 "data_offset": 2048, 00:18:39.668 "data_size": 63488 00:18:39.668 }, 00:18:39.668 { 00:18:39.668 "name": "BaseBdev4", 00:18:39.668 "uuid": "dec4bf85-b57d-5271-9189-e9a4bf3f56d7", 00:18:39.668 "is_configured": true, 00:18:39.668 "data_offset": 2048, 00:18:39.668 "data_size": 63488 00:18:39.668 } 00:18:39.668 ] 00:18:39.668 }' 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.668 09:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.236 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:40.236 09:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:40.236 [2024-11-06 09:11:39.103795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.185 09:11:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.185 "name": "raid_bdev1", 00:18:41.185 "uuid": "71cdbbcb-1378-4f6e-95c2-2beba0cb8d68", 00:18:41.185 "strip_size_kb": 64, 00:18:41.185 "state": "online", 00:18:41.185 "raid_level": "concat", 00:18:41.185 "superblock": true, 00:18:41.185 "num_base_bdevs": 4, 00:18:41.185 "num_base_bdevs_discovered": 4, 00:18:41.185 "num_base_bdevs_operational": 4, 00:18:41.185 "base_bdevs_list": [ 00:18:41.185 { 00:18:41.185 "name": "BaseBdev1", 00:18:41.185 "uuid": "a5993705-e55c-5b5e-957f-39562e48093e", 00:18:41.185 "is_configured": true, 00:18:41.185 "data_offset": 2048, 00:18:41.185 "data_size": 63488 00:18:41.185 }, 00:18:41.185 { 00:18:41.185 "name": "BaseBdev2", 00:18:41.185 "uuid": "a58d9277-d176-5766-94b4-a64aa8a25c0b", 00:18:41.185 "is_configured": true, 00:18:41.185 "data_offset": 2048, 00:18:41.185 "data_size": 63488 00:18:41.185 }, 00:18:41.185 { 00:18:41.185 "name": "BaseBdev3", 00:18:41.185 "uuid": "3f094d9b-dd85-5b95-a923-3dd3bc9cc48d", 00:18:41.185 "is_configured": true, 00:18:41.185 "data_offset": 2048, 00:18:41.185 "data_size": 63488 00:18:41.185 }, 00:18:41.185 { 00:18:41.185 "name": "BaseBdev4", 00:18:41.185 "uuid": "dec4bf85-b57d-5271-9189-e9a4bf3f56d7", 00:18:41.185 "is_configured": true, 00:18:41.185 "data_offset": 2048, 00:18:41.185 "data_size": 63488 00:18:41.185 } 00:18:41.185 ] 00:18:41.185 }' 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.185 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.444 [2024-11-06 09:11:40.452384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.444 [2024-11-06 09:11:40.452419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.444 [2024-11-06 09:11:40.454986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.444 [2024-11-06 09:11:40.455049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.444 [2024-11-06 09:11:40.455094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.444 [2024-11-06 09:11:40.455112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:41.444 { 00:18:41.444 "results": [ 00:18:41.444 { 00:18:41.444 "job": "raid_bdev1", 00:18:41.444 "core_mask": "0x1", 00:18:41.444 "workload": "randrw", 00:18:41.444 "percentage": 50, 00:18:41.444 "status": "finished", 00:18:41.444 "queue_depth": 1, 00:18:41.444 "io_size": 131072, 00:18:41.444 "runtime": 1.348633, 00:18:41.444 "iops": 15768.559719360272, 00:18:41.444 "mibps": 1971.069964920034, 00:18:41.444 "io_failed": 1, 00:18:41.444 "io_timeout": 0, 00:18:41.444 "avg_latency_us": 87.90327122190742, 00:18:41.444 "min_latency_us": 26.936546184738955, 00:18:41.444 "max_latency_us": 1454.1622489959839 00:18:41.444 } 00:18:41.444 ], 00:18:41.444 "core_count": 1 00:18:41.444 } 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72784 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 72784 ']' 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 72784 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.444 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72784 00:18:41.703 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.703 killing process with pid 72784 00:18:41.703 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.703 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72784' 00:18:41.703 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 72784 00:18:41.703 [2024-11-06 09:11:40.506502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.703 09:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 72784 00:18:41.962 [2024-11-06 09:11:40.836483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.psYZ2rVmIv 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:43.406 ************************************ 00:18:43.406 END TEST raid_write_error_test 00:18:43.406 ************************************ 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:43.406 09:11:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:18:43.406 00:18:43.406 real 0m4.793s 00:18:43.406 user 0m5.700s 00:18:43.406 sys 0m0.623s 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:43.406 09:11:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 09:11:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:43.406 09:11:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:43.406 09:11:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:43.406 09:11:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:43.406 09:11:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 ************************************ 00:18:43.406 START TEST raid_state_function_test 00:18:43.406 ************************************ 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:43.406 09:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:43.406 Process raid pid: 72930 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72930 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72930' 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72930 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 72930 ']' 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.406 09:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 [2024-11-06 09:11:42.271555] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:43.406 [2024-11-06 09:11:42.271904] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.665 [2024-11-06 09:11:42.452796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.665 [2024-11-06 09:11:42.563071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.924 [2024-11-06 09:11:42.781310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.924 [2024-11-06 09:11:42.781341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.182 [2024-11-06 09:11:43.113793] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.182 [2024-11-06 09:11:43.113856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.182 [2024-11-06 09:11:43.113868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.182 [2024-11-06 09:11:43.113882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.182 [2024-11-06 09:11:43.113889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:44.182 [2024-11-06 09:11:43.113901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:44.182 [2024-11-06 09:11:43.113909] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:44.182 [2024-11-06 09:11:43.113922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.182 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.183 "name": "Existed_Raid", 00:18:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.183 "strip_size_kb": 0, 00:18:44.183 "state": "configuring", 00:18:44.183 "raid_level": "raid1", 00:18:44.183 "superblock": false, 00:18:44.183 "num_base_bdevs": 4, 00:18:44.183 "num_base_bdevs_discovered": 0, 00:18:44.183 "num_base_bdevs_operational": 4, 00:18:44.183 "base_bdevs_list": [ 00:18:44.183 { 00:18:44.183 "name": "BaseBdev1", 00:18:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.183 "is_configured": false, 00:18:44.183 "data_offset": 0, 00:18:44.183 "data_size": 0 00:18:44.183 }, 00:18:44.183 { 00:18:44.183 "name": "BaseBdev2", 00:18:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.183 "is_configured": false, 00:18:44.183 "data_offset": 0, 00:18:44.183 "data_size": 0 00:18:44.183 }, 00:18:44.183 { 00:18:44.183 "name": "BaseBdev3", 00:18:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.183 "is_configured": false, 00:18:44.183 "data_offset": 0, 00:18:44.183 "data_size": 0 00:18:44.183 }, 00:18:44.183 { 00:18:44.183 "name": "BaseBdev4", 00:18:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.183 "is_configured": false, 00:18:44.183 "data_offset": 0, 00:18:44.183 "data_size": 0 00:18:44.183 } 00:18:44.183 ] 00:18:44.183 }' 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.183 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.748 [2024-11-06 09:11:43.509300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.748 [2024-11-06 09:11:43.509349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.748 [2024-11-06 09:11:43.517247] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.748 [2024-11-06 09:11:43.517316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.748 [2024-11-06 09:11:43.517332] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.748 [2024-11-06 09:11:43.517351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.748 [2024-11-06 09:11:43.517363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:44.748 [2024-11-06 09:11:43.517381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:44.748 [2024-11-06 09:11:43.517393] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:44.748 [2024-11-06 09:11:43.517411] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.748 [2024-11-06 09:11:43.567733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.748 BaseBdev1 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.748 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.749 [ 00:18:44.749 { 00:18:44.749 "name": "BaseBdev1", 00:18:44.749 "aliases": [ 00:18:44.749 "37c505ee-3a34-499b-a858-dd52a9b9066c" 00:18:44.749 ], 00:18:44.749 "product_name": "Malloc disk", 00:18:44.749 "block_size": 512, 00:18:44.749 "num_blocks": 65536, 00:18:44.749 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:44.749 "assigned_rate_limits": { 00:18:44.749 "rw_ios_per_sec": 0, 00:18:44.749 "rw_mbytes_per_sec": 0, 00:18:44.749 "r_mbytes_per_sec": 0, 00:18:44.749 "w_mbytes_per_sec": 0 00:18:44.749 }, 00:18:44.749 "claimed": true, 00:18:44.749 "claim_type": "exclusive_write", 00:18:44.749 "zoned": false, 00:18:44.749 "supported_io_types": { 00:18:44.749 "read": true, 00:18:44.749 "write": true, 00:18:44.749 "unmap": true, 00:18:44.749 "flush": true, 00:18:44.749 "reset": true, 00:18:44.749 "nvme_admin": false, 00:18:44.749 "nvme_io": false, 00:18:44.749 "nvme_io_md": false, 00:18:44.749 "write_zeroes": true, 00:18:44.749 "zcopy": true, 00:18:44.749 "get_zone_info": false, 00:18:44.749 "zone_management": false, 00:18:44.749 "zone_append": false, 00:18:44.749 "compare": false, 00:18:44.749 "compare_and_write": false, 00:18:44.749 "abort": true, 00:18:44.749 "seek_hole": false, 00:18:44.749 "seek_data": false, 00:18:44.749 "copy": true, 00:18:44.749 "nvme_iov_md": false 00:18:44.749 }, 00:18:44.749 "memory_domains": [ 00:18:44.749 { 00:18:44.749 "dma_device_id": "system", 00:18:44.749 "dma_device_type": 1 00:18:44.749 }, 00:18:44.749 { 00:18:44.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.749 "dma_device_type": 2 00:18:44.749 } 00:18:44.749 ], 00:18:44.749 "driver_specific": {} 00:18:44.749 } 00:18:44.749 ] 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.749 "name": "Existed_Raid", 
00:18:44.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.749 "strip_size_kb": 0, 00:18:44.749 "state": "configuring", 00:18:44.749 "raid_level": "raid1", 00:18:44.749 "superblock": false, 00:18:44.749 "num_base_bdevs": 4, 00:18:44.749 "num_base_bdevs_discovered": 1, 00:18:44.749 "num_base_bdevs_operational": 4, 00:18:44.749 "base_bdevs_list": [ 00:18:44.749 { 00:18:44.749 "name": "BaseBdev1", 00:18:44.749 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:44.749 "is_configured": true, 00:18:44.749 "data_offset": 0, 00:18:44.749 "data_size": 65536 00:18:44.749 }, 00:18:44.749 { 00:18:44.749 "name": "BaseBdev2", 00:18:44.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.749 "is_configured": false, 00:18:44.749 "data_offset": 0, 00:18:44.749 "data_size": 0 00:18:44.749 }, 00:18:44.749 { 00:18:44.749 "name": "BaseBdev3", 00:18:44.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.749 "is_configured": false, 00:18:44.749 "data_offset": 0, 00:18:44.749 "data_size": 0 00:18:44.749 }, 00:18:44.749 { 00:18:44.749 "name": "BaseBdev4", 00:18:44.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.749 "is_configured": false, 00:18:44.749 "data_offset": 0, 00:18:44.749 "data_size": 0 00:18:44.749 } 00:18:44.749 ] 00:18:44.749 }' 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.749 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.006 [2024-11-06 09:11:43.939423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:45.006 [2024-11-06 09:11:43.939482] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.006 [2024-11-06 09:11:43.947472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.006 [2024-11-06 09:11:43.949652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.006 [2024-11-06 09:11:43.949694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.006 [2024-11-06 09:11:43.949706] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:45.006 [2024-11-06 09:11:43.949722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.006 [2024-11-06 09:11:43.949731] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:45.006 [2024-11-06 09:11:43.949744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:45.006 
09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.006 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.006 "name": "Existed_Raid", 00:18:45.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.006 "strip_size_kb": 0, 00:18:45.007 "state": "configuring", 00:18:45.007 "raid_level": "raid1", 00:18:45.007 "superblock": false, 00:18:45.007 "num_base_bdevs": 4, 00:18:45.007 "num_base_bdevs_discovered": 1, 
00:18:45.007 "num_base_bdevs_operational": 4, 00:18:45.007 "base_bdevs_list": [ 00:18:45.007 { 00:18:45.007 "name": "BaseBdev1", 00:18:45.007 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:45.007 "is_configured": true, 00:18:45.007 "data_offset": 0, 00:18:45.007 "data_size": 65536 00:18:45.007 }, 00:18:45.007 { 00:18:45.007 "name": "BaseBdev2", 00:18:45.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.007 "is_configured": false, 00:18:45.007 "data_offset": 0, 00:18:45.007 "data_size": 0 00:18:45.007 }, 00:18:45.007 { 00:18:45.007 "name": "BaseBdev3", 00:18:45.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.007 "is_configured": false, 00:18:45.007 "data_offset": 0, 00:18:45.007 "data_size": 0 00:18:45.007 }, 00:18:45.007 { 00:18:45.007 "name": "BaseBdev4", 00:18:45.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.007 "is_configured": false, 00:18:45.007 "data_offset": 0, 00:18:45.007 "data_size": 0 00:18:45.007 } 00:18:45.007 ] 00:18:45.007 }' 00:18:45.007 09:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.007 09:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.573 [2024-11-06 09:11:44.358628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.573 BaseBdev2 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.573 [ 00:18:45.573 { 00:18:45.573 "name": "BaseBdev2", 00:18:45.573 "aliases": [ 00:18:45.573 "af8c4adb-0ece-4c23-834e-e4be5eb30ab6" 00:18:45.573 ], 00:18:45.573 "product_name": "Malloc disk", 00:18:45.573 "block_size": 512, 00:18:45.573 "num_blocks": 65536, 00:18:45.573 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:45.573 "assigned_rate_limits": { 00:18:45.573 "rw_ios_per_sec": 0, 00:18:45.573 "rw_mbytes_per_sec": 0, 00:18:45.573 "r_mbytes_per_sec": 0, 00:18:45.573 "w_mbytes_per_sec": 0 00:18:45.573 }, 00:18:45.573 "claimed": true, 00:18:45.573 "claim_type": "exclusive_write", 00:18:45.573 "zoned": false, 00:18:45.573 "supported_io_types": { 00:18:45.573 "read": true, 
00:18:45.573 "write": true, 00:18:45.573 "unmap": true, 00:18:45.573 "flush": true, 00:18:45.573 "reset": true, 00:18:45.573 "nvme_admin": false, 00:18:45.573 "nvme_io": false, 00:18:45.573 "nvme_io_md": false, 00:18:45.573 "write_zeroes": true, 00:18:45.573 "zcopy": true, 00:18:45.573 "get_zone_info": false, 00:18:45.573 "zone_management": false, 00:18:45.573 "zone_append": false, 00:18:45.573 "compare": false, 00:18:45.573 "compare_and_write": false, 00:18:45.573 "abort": true, 00:18:45.573 "seek_hole": false, 00:18:45.573 "seek_data": false, 00:18:45.573 "copy": true, 00:18:45.573 "nvme_iov_md": false 00:18:45.573 }, 00:18:45.573 "memory_domains": [ 00:18:45.573 { 00:18:45.573 "dma_device_id": "system", 00:18:45.573 "dma_device_type": 1 00:18:45.573 }, 00:18:45.573 { 00:18:45.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.573 "dma_device_type": 2 00:18:45.573 } 00:18:45.573 ], 00:18:45.573 "driver_specific": {} 00:18:45.573 } 00:18:45.573 ] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.573 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.573 "name": "Existed_Raid", 00:18:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.573 "strip_size_kb": 0, 00:18:45.573 "state": "configuring", 00:18:45.573 "raid_level": "raid1", 00:18:45.573 "superblock": false, 00:18:45.573 "num_base_bdevs": 4, 00:18:45.573 "num_base_bdevs_discovered": 2, 00:18:45.573 "num_base_bdevs_operational": 4, 00:18:45.573 "base_bdevs_list": [ 00:18:45.573 { 00:18:45.573 "name": "BaseBdev1", 00:18:45.573 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:45.573 "is_configured": true, 00:18:45.573 "data_offset": 0, 00:18:45.573 "data_size": 65536 00:18:45.573 }, 00:18:45.573 { 00:18:45.573 "name": "BaseBdev2", 00:18:45.573 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:45.573 "is_configured": true, 
00:18:45.573 "data_offset": 0, 00:18:45.573 "data_size": 65536 00:18:45.573 }, 00:18:45.573 { 00:18:45.573 "name": "BaseBdev3", 00:18:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.573 "is_configured": false, 00:18:45.573 "data_offset": 0, 00:18:45.573 "data_size": 0 00:18:45.573 }, 00:18:45.573 { 00:18:45.573 "name": "BaseBdev4", 00:18:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.573 "is_configured": false, 00:18:45.573 "data_offset": 0, 00:18:45.573 "data_size": 0 00:18:45.573 } 00:18:45.573 ] 00:18:45.574 }' 00:18:45.574 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.574 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.832 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:45.832 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.832 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.091 [2024-11-06 09:11:44.886498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.091 BaseBdev3 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.091 [ 00:18:46.091 { 00:18:46.091 "name": "BaseBdev3", 00:18:46.091 "aliases": [ 00:18:46.091 "8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54" 00:18:46.091 ], 00:18:46.091 "product_name": "Malloc disk", 00:18:46.091 "block_size": 512, 00:18:46.091 "num_blocks": 65536, 00:18:46.091 "uuid": "8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54", 00:18:46.091 "assigned_rate_limits": { 00:18:46.091 "rw_ios_per_sec": 0, 00:18:46.091 "rw_mbytes_per_sec": 0, 00:18:46.091 "r_mbytes_per_sec": 0, 00:18:46.091 "w_mbytes_per_sec": 0 00:18:46.091 }, 00:18:46.091 "claimed": true, 00:18:46.091 "claim_type": "exclusive_write", 00:18:46.091 "zoned": false, 00:18:46.091 "supported_io_types": { 00:18:46.091 "read": true, 00:18:46.091 "write": true, 00:18:46.091 "unmap": true, 00:18:46.091 "flush": true, 00:18:46.091 "reset": true, 00:18:46.091 "nvme_admin": false, 00:18:46.091 "nvme_io": false, 00:18:46.091 "nvme_io_md": false, 00:18:46.091 "write_zeroes": true, 00:18:46.091 "zcopy": true, 00:18:46.091 "get_zone_info": false, 00:18:46.091 "zone_management": false, 00:18:46.091 "zone_append": false, 00:18:46.091 "compare": false, 00:18:46.091 "compare_and_write": false, 
00:18:46.091 "abort": true, 00:18:46.091 "seek_hole": false, 00:18:46.091 "seek_data": false, 00:18:46.091 "copy": true, 00:18:46.091 "nvme_iov_md": false 00:18:46.091 }, 00:18:46.091 "memory_domains": [ 00:18:46.091 { 00:18:46.091 "dma_device_id": "system", 00:18:46.091 "dma_device_type": 1 00:18:46.091 }, 00:18:46.091 { 00:18:46.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.091 "dma_device_type": 2 00:18:46.091 } 00:18:46.091 ], 00:18:46.091 "driver_specific": {} 00:18:46.091 } 00:18:46.091 ] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.091 "name": "Existed_Raid", 00:18:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.091 "strip_size_kb": 0, 00:18:46.091 "state": "configuring", 00:18:46.091 "raid_level": "raid1", 00:18:46.091 "superblock": false, 00:18:46.091 "num_base_bdevs": 4, 00:18:46.091 "num_base_bdevs_discovered": 3, 00:18:46.091 "num_base_bdevs_operational": 4, 00:18:46.091 "base_bdevs_list": [ 00:18:46.091 { 00:18:46.091 "name": "BaseBdev1", 00:18:46.091 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:46.091 "is_configured": true, 00:18:46.091 "data_offset": 0, 00:18:46.091 "data_size": 65536 00:18:46.091 }, 00:18:46.091 { 00:18:46.091 "name": "BaseBdev2", 00:18:46.091 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:46.091 "is_configured": true, 00:18:46.091 "data_offset": 0, 00:18:46.091 "data_size": 65536 00:18:46.091 }, 00:18:46.091 { 00:18:46.091 "name": "BaseBdev3", 00:18:46.091 "uuid": "8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54", 00:18:46.091 "is_configured": true, 00:18:46.091 "data_offset": 0, 00:18:46.091 "data_size": 65536 00:18:46.091 }, 00:18:46.091 { 00:18:46.091 "name": "BaseBdev4", 00:18:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.091 "is_configured": false, 
00:18:46.091 "data_offset": 0, 00:18:46.091 "data_size": 0 00:18:46.091 } 00:18:46.091 ] 00:18:46.091 }' 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.091 09:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.350 [2024-11-06 09:11:45.337441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.350 [2024-11-06 09:11:45.337507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:46.350 [2024-11-06 09:11:45.337518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:46.350 [2024-11-06 09:11:45.337829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:46.350 [2024-11-06 09:11:45.338020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:46.350 [2024-11-06 09:11:45.338035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:46.350 [2024-11-06 09:11:45.338358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.350 BaseBdev4 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:46.350 09:11:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.351 [ 00:18:46.351 { 00:18:46.351 "name": "BaseBdev4", 00:18:46.351 "aliases": [ 00:18:46.351 "354f6b30-400e-44ce-b30c-1964ac7c46b9" 00:18:46.351 ], 00:18:46.351 "product_name": "Malloc disk", 00:18:46.351 "block_size": 512, 00:18:46.351 "num_blocks": 65536, 00:18:46.351 "uuid": "354f6b30-400e-44ce-b30c-1964ac7c46b9", 00:18:46.351 "assigned_rate_limits": { 00:18:46.351 "rw_ios_per_sec": 0, 00:18:46.351 "rw_mbytes_per_sec": 0, 00:18:46.351 "r_mbytes_per_sec": 0, 00:18:46.351 "w_mbytes_per_sec": 0 00:18:46.351 }, 00:18:46.351 "claimed": true, 00:18:46.351 "claim_type": "exclusive_write", 00:18:46.351 "zoned": false, 00:18:46.351 "supported_io_types": { 00:18:46.351 "read": true, 00:18:46.351 "write": true, 00:18:46.351 "unmap": true, 00:18:46.351 "flush": true, 00:18:46.351 "reset": true, 00:18:46.351 
"nvme_admin": false, 00:18:46.351 "nvme_io": false, 00:18:46.351 "nvme_io_md": false, 00:18:46.351 "write_zeroes": true, 00:18:46.351 "zcopy": true, 00:18:46.351 "get_zone_info": false, 00:18:46.351 "zone_management": false, 00:18:46.351 "zone_append": false, 00:18:46.351 "compare": false, 00:18:46.351 "compare_and_write": false, 00:18:46.351 "abort": true, 00:18:46.351 "seek_hole": false, 00:18:46.351 "seek_data": false, 00:18:46.351 "copy": true, 00:18:46.351 "nvme_iov_md": false 00:18:46.351 }, 00:18:46.351 "memory_domains": [ 00:18:46.351 { 00:18:46.351 "dma_device_id": "system", 00:18:46.351 "dma_device_type": 1 00:18:46.351 }, 00:18:46.351 { 00:18:46.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.351 "dma_device_type": 2 00:18:46.351 } 00:18:46.351 ], 00:18:46.351 "driver_specific": {} 00:18:46.351 } 00:18:46.351 ] 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.351 09:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.351 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.612 "name": "Existed_Raid", 00:18:46.612 "uuid": "a1833dc2-932b-4da1-a55d-1b101c80c78e", 00:18:46.612 "strip_size_kb": 0, 00:18:46.612 "state": "online", 00:18:46.612 "raid_level": "raid1", 00:18:46.612 "superblock": false, 00:18:46.612 "num_base_bdevs": 4, 00:18:46.612 "num_base_bdevs_discovered": 4, 00:18:46.612 "num_base_bdevs_operational": 4, 00:18:46.612 "base_bdevs_list": [ 00:18:46.612 { 00:18:46.612 "name": "BaseBdev1", 00:18:46.612 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:46.612 "is_configured": true, 00:18:46.612 "data_offset": 0, 00:18:46.612 "data_size": 65536 00:18:46.612 }, 00:18:46.612 { 00:18:46.612 "name": "BaseBdev2", 00:18:46.612 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:46.612 "is_configured": true, 00:18:46.612 "data_offset": 0, 00:18:46.612 "data_size": 65536 00:18:46.612 }, 00:18:46.612 { 00:18:46.612 "name": "BaseBdev3", 00:18:46.612 "uuid": 
"8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54", 00:18:46.612 "is_configured": true, 00:18:46.612 "data_offset": 0, 00:18:46.612 "data_size": 65536 00:18:46.612 }, 00:18:46.612 { 00:18:46.612 "name": "BaseBdev4", 00:18:46.612 "uuid": "354f6b30-400e-44ce-b30c-1964ac7c46b9", 00:18:46.612 "is_configured": true, 00:18:46.612 "data_offset": 0, 00:18:46.612 "data_size": 65536 00:18:46.612 } 00:18:46.612 ] 00:18:46.612 }' 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.612 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.886 [2024-11-06 09:11:45.833745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.886 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.886 09:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.886 "name": "Existed_Raid", 00:18:46.886 "aliases": [ 00:18:46.886 "a1833dc2-932b-4da1-a55d-1b101c80c78e" 00:18:46.886 ], 00:18:46.886 "product_name": "Raid Volume", 00:18:46.886 "block_size": 512, 00:18:46.886 "num_blocks": 65536, 00:18:46.886 "uuid": "a1833dc2-932b-4da1-a55d-1b101c80c78e", 00:18:46.886 "assigned_rate_limits": { 00:18:46.886 "rw_ios_per_sec": 0, 00:18:46.886 "rw_mbytes_per_sec": 0, 00:18:46.886 "r_mbytes_per_sec": 0, 00:18:46.886 "w_mbytes_per_sec": 0 00:18:46.886 }, 00:18:46.886 "claimed": false, 00:18:46.886 "zoned": false, 00:18:46.886 "supported_io_types": { 00:18:46.886 "read": true, 00:18:46.886 "write": true, 00:18:46.886 "unmap": false, 00:18:46.886 "flush": false, 00:18:46.886 "reset": true, 00:18:46.886 "nvme_admin": false, 00:18:46.886 "nvme_io": false, 00:18:46.886 "nvme_io_md": false, 00:18:46.886 "write_zeroes": true, 00:18:46.886 "zcopy": false, 00:18:46.886 "get_zone_info": false, 00:18:46.886 "zone_management": false, 00:18:46.886 "zone_append": false, 00:18:46.886 "compare": false, 00:18:46.886 "compare_and_write": false, 00:18:46.886 "abort": false, 00:18:46.886 "seek_hole": false, 00:18:46.886 "seek_data": false, 00:18:46.886 "copy": false, 00:18:46.886 "nvme_iov_md": false 00:18:46.886 }, 00:18:46.886 "memory_domains": [ 00:18:46.886 { 00:18:46.886 "dma_device_id": "system", 00:18:46.886 "dma_device_type": 1 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.886 "dma_device_type": 2 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "system", 00:18:46.886 "dma_device_type": 1 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.886 "dma_device_type": 2 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "system", 00:18:46.886 "dma_device_type": 1 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:46.886 "dma_device_type": 2 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "system", 00:18:46.886 "dma_device_type": 1 00:18:46.886 }, 00:18:46.886 { 00:18:46.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.886 "dma_device_type": 2 00:18:46.886 } 00:18:46.886 ], 00:18:46.886 "driver_specific": { 00:18:46.886 "raid": { 00:18:46.886 "uuid": "a1833dc2-932b-4da1-a55d-1b101c80c78e", 00:18:46.886 "strip_size_kb": 0, 00:18:46.886 "state": "online", 00:18:46.886 "raid_level": "raid1", 00:18:46.886 "superblock": false, 00:18:46.886 "num_base_bdevs": 4, 00:18:46.886 "num_base_bdevs_discovered": 4, 00:18:46.886 "num_base_bdevs_operational": 4, 00:18:46.886 "base_bdevs_list": [ 00:18:46.886 { 00:18:46.886 "name": "BaseBdev1", 00:18:46.886 "uuid": "37c505ee-3a34-499b-a858-dd52a9b9066c", 00:18:46.886 "is_configured": true, 00:18:46.886 "data_offset": 0, 00:18:46.886 "data_size": 65536 00:18:46.886 }, 00:18:46.886 { 00:18:46.887 "name": "BaseBdev2", 00:18:46.887 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:46.887 "is_configured": true, 00:18:46.887 "data_offset": 0, 00:18:46.887 "data_size": 65536 00:18:46.887 }, 00:18:46.887 { 00:18:46.887 "name": "BaseBdev3", 00:18:46.887 "uuid": "8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54", 00:18:46.887 "is_configured": true, 00:18:46.887 "data_offset": 0, 00:18:46.887 "data_size": 65536 00:18:46.887 }, 00:18:46.887 { 00:18:46.887 "name": "BaseBdev4", 00:18:46.887 "uuid": "354f6b30-400e-44ce-b30c-1964ac7c46b9", 00:18:46.887 "is_configured": true, 00:18:46.887 "data_offset": 0, 00:18:46.887 "data_size": 65536 00:18:46.887 } 00:18:46.887 ] 00:18:46.887 } 00:18:46.887 } 00:18:46.887 }' 00:18:46.887 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:46.887 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:46.887 BaseBdev2 00:18:46.887 BaseBdev3 
00:18:46.887 BaseBdev4' 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 09:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.151 09:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.151 09:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 [2024-11-06 09:11:46.145454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.410 
09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.410 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.411 "name": "Existed_Raid", 00:18:47.411 "uuid": "a1833dc2-932b-4da1-a55d-1b101c80c78e", 00:18:47.411 "strip_size_kb": 0, 00:18:47.411 "state": "online", 00:18:47.411 "raid_level": "raid1", 00:18:47.411 "superblock": false, 00:18:47.411 "num_base_bdevs": 4, 00:18:47.411 "num_base_bdevs_discovered": 3, 00:18:47.411 "num_base_bdevs_operational": 3, 00:18:47.411 "base_bdevs_list": [ 00:18:47.411 { 00:18:47.411 "name": null, 00:18:47.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.411 "is_configured": false, 00:18:47.411 "data_offset": 0, 00:18:47.411 "data_size": 65536 00:18:47.411 }, 00:18:47.411 { 00:18:47.411 "name": "BaseBdev2", 00:18:47.411 "uuid": "af8c4adb-0ece-4c23-834e-e4be5eb30ab6", 00:18:47.411 "is_configured": true, 00:18:47.411 "data_offset": 0, 00:18:47.411 "data_size": 65536 00:18:47.411 }, 00:18:47.411 { 00:18:47.411 "name": "BaseBdev3", 00:18:47.411 "uuid": "8f00ffc6-ec6d-49f3-a69d-1ed0f90c2f54", 00:18:47.411 "is_configured": true, 00:18:47.411 "data_offset": 0, 
00:18:47.411 "data_size": 65536 00:18:47.411 }, 00:18:47.411 { 00:18:47.411 "name": "BaseBdev4", 00:18:47.411 "uuid": "354f6b30-400e-44ce-b30c-1964ac7c46b9", 00:18:47.411 "is_configured": true, 00:18:47.411 "data_offset": 0, 00:18:47.411 "data_size": 65536 00:18:47.411 } 00:18:47.411 ] 00:18:47.411 }' 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.411 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.670 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.670 [2024-11-06 09:11:46.693463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.929 [2024-11-06 09:11:46.841285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:47.929 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.188 09:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.188 [2024-11-06 09:11:46.992565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:48.188 [2024-11-06 09:11:46.992663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.188 [2024-11-06 09:11:47.089580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.188 [2024-11-06 09:11:47.089631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.188 [2024-11-06 09:11:47.089646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.188 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.189 BaseBdev2 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.189 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.189 [ 00:18:48.189 { 00:18:48.189 "name": "BaseBdev2", 00:18:48.189 "aliases": [ 00:18:48.189 "6fa6444a-dce3-4a76-93e3-e33e2f404838" 00:18:48.189 ], 00:18:48.189 "product_name": "Malloc disk", 00:18:48.189 "block_size": 512, 00:18:48.189 "num_blocks": 65536, 00:18:48.189 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:48.189 "assigned_rate_limits": { 00:18:48.189 "rw_ios_per_sec": 0, 00:18:48.189 "rw_mbytes_per_sec": 0, 00:18:48.189 "r_mbytes_per_sec": 0, 00:18:48.189 "w_mbytes_per_sec": 0 00:18:48.189 }, 00:18:48.189 "claimed": false, 00:18:48.189 "zoned": false, 00:18:48.189 "supported_io_types": { 00:18:48.189 "read": true, 00:18:48.189 "write": true, 00:18:48.189 "unmap": true, 00:18:48.189 "flush": true, 00:18:48.189 "reset": true, 00:18:48.189 "nvme_admin": false, 00:18:48.189 "nvme_io": false, 00:18:48.189 "nvme_io_md": false, 00:18:48.189 "write_zeroes": true, 00:18:48.189 "zcopy": true, 00:18:48.189 "get_zone_info": false, 00:18:48.447 "zone_management": false, 00:18:48.447 "zone_append": false, 
00:18:48.447 "compare": false, 00:18:48.447 "compare_and_write": false, 00:18:48.447 "abort": true, 00:18:48.447 "seek_hole": false, 00:18:48.447 "seek_data": false, 00:18:48.447 "copy": true, 00:18:48.447 "nvme_iov_md": false 00:18:48.447 }, 00:18:48.447 "memory_domains": [ 00:18:48.447 { 00:18:48.447 "dma_device_id": "system", 00:18:48.447 "dma_device_type": 1 00:18:48.447 }, 00:18:48.447 { 00:18:48.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.447 "dma_device_type": 2 00:18:48.447 } 00:18:48.447 ], 00:18:48.448 "driver_specific": {} 00:18:48.448 } 00:18:48.448 ] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 BaseBdev3 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 [ 00:18:48.448 { 00:18:48.448 "name": "BaseBdev3", 00:18:48.448 "aliases": [ 00:18:48.448 "686eb5e7-9530-4504-ba50-f004a50ee660" 00:18:48.448 ], 00:18:48.448 "product_name": "Malloc disk", 00:18:48.448 "block_size": 512, 00:18:48.448 "num_blocks": 65536, 00:18:48.448 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:48.448 "assigned_rate_limits": { 00:18:48.448 "rw_ios_per_sec": 0, 00:18:48.448 "rw_mbytes_per_sec": 0, 00:18:48.448 "r_mbytes_per_sec": 0, 00:18:48.448 "w_mbytes_per_sec": 0 00:18:48.448 }, 00:18:48.448 "claimed": false, 00:18:48.448 "zoned": false, 00:18:48.448 "supported_io_types": { 00:18:48.448 "read": true, 00:18:48.448 "write": true, 00:18:48.448 "unmap": true, 00:18:48.448 "flush": true, 00:18:48.448 "reset": true, 00:18:48.448 "nvme_admin": false, 00:18:48.448 "nvme_io": false, 00:18:48.448 "nvme_io_md": false, 00:18:48.448 "write_zeroes": true, 00:18:48.448 "zcopy": true, 00:18:48.448 "get_zone_info": false, 00:18:48.448 "zone_management": false, 00:18:48.448 "zone_append": false, 
00:18:48.448 "compare": false, 00:18:48.448 "compare_and_write": false, 00:18:48.448 "abort": true, 00:18:48.448 "seek_hole": false, 00:18:48.448 "seek_data": false, 00:18:48.448 "copy": true, 00:18:48.448 "nvme_iov_md": false 00:18:48.448 }, 00:18:48.448 "memory_domains": [ 00:18:48.448 { 00:18:48.448 "dma_device_id": "system", 00:18:48.448 "dma_device_type": 1 00:18:48.448 }, 00:18:48.448 { 00:18:48.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.448 "dma_device_type": 2 00:18:48.448 } 00:18:48.448 ], 00:18:48.448 "driver_specific": {} 00:18:48.448 } 00:18:48.448 ] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 BaseBdev4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 [ 00:18:48.448 { 00:18:48.448 "name": "BaseBdev4", 00:18:48.448 "aliases": [ 00:18:48.448 "8d0ea64b-a806-47ca-90c3-13d184e224d8" 00:18:48.448 ], 00:18:48.448 "product_name": "Malloc disk", 00:18:48.448 "block_size": 512, 00:18:48.448 "num_blocks": 65536, 00:18:48.448 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:48.448 "assigned_rate_limits": { 00:18:48.448 "rw_ios_per_sec": 0, 00:18:48.448 "rw_mbytes_per_sec": 0, 00:18:48.448 "r_mbytes_per_sec": 0, 00:18:48.448 "w_mbytes_per_sec": 0 00:18:48.448 }, 00:18:48.448 "claimed": false, 00:18:48.448 "zoned": false, 00:18:48.448 "supported_io_types": { 00:18:48.448 "read": true, 00:18:48.448 "write": true, 00:18:48.448 "unmap": true, 00:18:48.448 "flush": true, 00:18:48.448 "reset": true, 00:18:48.448 "nvme_admin": false, 00:18:48.448 "nvme_io": false, 00:18:48.448 "nvme_io_md": false, 00:18:48.448 "write_zeroes": true, 00:18:48.448 "zcopy": true, 00:18:48.448 "get_zone_info": false, 00:18:48.448 "zone_management": false, 00:18:48.448 "zone_append": false, 
00:18:48.448 "compare": false, 00:18:48.448 "compare_and_write": false, 00:18:48.448 "abort": true, 00:18:48.448 "seek_hole": false, 00:18:48.448 "seek_data": false, 00:18:48.448 "copy": true, 00:18:48.448 "nvme_iov_md": false 00:18:48.448 }, 00:18:48.448 "memory_domains": [ 00:18:48.448 { 00:18:48.448 "dma_device_id": "system", 00:18:48.448 "dma_device_type": 1 00:18:48.448 }, 00:18:48.448 { 00:18:48.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.448 "dma_device_type": 2 00:18:48.448 } 00:18:48.448 ], 00:18:48.448 "driver_specific": {} 00:18:48.448 } 00:18:48.448 ] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.448 [2024-11-06 09:11:47.429884] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:48.448 [2024-11-06 09:11:47.429932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:48.448 [2024-11-06 09:11:47.429954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.448 [2024-11-06 09:11:47.432029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.448 [2024-11-06 09:11:47.432083] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.448 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.449 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.449 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.449 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:18:48.449 "name": "Existed_Raid", 00:18:48.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.449 "strip_size_kb": 0, 00:18:48.449 "state": "configuring", 00:18:48.449 "raid_level": "raid1", 00:18:48.449 "superblock": false, 00:18:48.449 "num_base_bdevs": 4, 00:18:48.449 "num_base_bdevs_discovered": 3, 00:18:48.449 "num_base_bdevs_operational": 4, 00:18:48.449 "base_bdevs_list": [ 00:18:48.449 { 00:18:48.449 "name": "BaseBdev1", 00:18:48.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.449 "is_configured": false, 00:18:48.449 "data_offset": 0, 00:18:48.449 "data_size": 0 00:18:48.449 }, 00:18:48.449 { 00:18:48.449 "name": "BaseBdev2", 00:18:48.449 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:48.449 "is_configured": true, 00:18:48.449 "data_offset": 0, 00:18:48.449 "data_size": 65536 00:18:48.449 }, 00:18:48.449 { 00:18:48.449 "name": "BaseBdev3", 00:18:48.449 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:48.449 "is_configured": true, 00:18:48.449 "data_offset": 0, 00:18:48.449 "data_size": 65536 00:18:48.449 }, 00:18:48.449 { 00:18:48.449 "name": "BaseBdev4", 00:18:48.449 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:48.449 "is_configured": true, 00:18:48.449 "data_offset": 0, 00:18:48.449 "data_size": 65536 00:18:48.449 } 00:18:48.449 ] 00:18:48.449 }' 00:18:48.449 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.449 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.018 [2024-11-06 09:11:47.821435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.018 "name": "Existed_Raid", 00:18:49.018 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:49.018 "strip_size_kb": 0, 00:18:49.018 "state": "configuring", 00:18:49.018 "raid_level": "raid1", 00:18:49.018 "superblock": false, 00:18:49.018 "num_base_bdevs": 4, 00:18:49.018 "num_base_bdevs_discovered": 2, 00:18:49.018 "num_base_bdevs_operational": 4, 00:18:49.018 "base_bdevs_list": [ 00:18:49.018 { 00:18:49.018 "name": "BaseBdev1", 00:18:49.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.018 "is_configured": false, 00:18:49.018 "data_offset": 0, 00:18:49.018 "data_size": 0 00:18:49.018 }, 00:18:49.018 { 00:18:49.018 "name": null, 00:18:49.018 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:49.018 "is_configured": false, 00:18:49.018 "data_offset": 0, 00:18:49.018 "data_size": 65536 00:18:49.018 }, 00:18:49.018 { 00:18:49.018 "name": "BaseBdev3", 00:18:49.018 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:49.018 "is_configured": true, 00:18:49.018 "data_offset": 0, 00:18:49.018 "data_size": 65536 00:18:49.018 }, 00:18:49.018 { 00:18:49.018 "name": "BaseBdev4", 00:18:49.018 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:49.018 "is_configured": true, 00:18:49.018 "data_offset": 0, 00:18:49.018 "data_size": 65536 00:18:49.018 } 00:18:49.018 ] 00:18:49.018 }' 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.018 09:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.277 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.536 [2024-11-06 09:11:48.340056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.536 BaseBdev1 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.536 [ 00:18:49.536 { 00:18:49.536 "name": "BaseBdev1", 00:18:49.536 "aliases": [ 00:18:49.536 "436072d4-8a0c-4dd4-b29e-d77914bbed59" 00:18:49.536 ], 00:18:49.536 "product_name": "Malloc disk", 00:18:49.536 "block_size": 512, 00:18:49.536 "num_blocks": 65536, 00:18:49.536 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:49.536 "assigned_rate_limits": { 00:18:49.536 "rw_ios_per_sec": 0, 00:18:49.536 "rw_mbytes_per_sec": 0, 00:18:49.536 "r_mbytes_per_sec": 0, 00:18:49.536 "w_mbytes_per_sec": 0 00:18:49.536 }, 00:18:49.536 "claimed": true, 00:18:49.536 "claim_type": "exclusive_write", 00:18:49.536 "zoned": false, 00:18:49.536 "supported_io_types": { 00:18:49.536 "read": true, 00:18:49.536 "write": true, 00:18:49.536 "unmap": true, 00:18:49.536 "flush": true, 00:18:49.536 "reset": true, 00:18:49.536 "nvme_admin": false, 00:18:49.536 "nvme_io": false, 00:18:49.536 "nvme_io_md": false, 00:18:49.536 "write_zeroes": true, 00:18:49.536 "zcopy": true, 00:18:49.536 "get_zone_info": false, 00:18:49.536 "zone_management": false, 00:18:49.536 "zone_append": false, 00:18:49.536 "compare": false, 00:18:49.536 "compare_and_write": false, 00:18:49.536 "abort": true, 00:18:49.536 "seek_hole": false, 00:18:49.536 "seek_data": false, 00:18:49.536 "copy": true, 00:18:49.536 "nvme_iov_md": false 00:18:49.536 }, 00:18:49.536 "memory_domains": [ 00:18:49.536 { 00:18:49.536 "dma_device_id": "system", 00:18:49.536 "dma_device_type": 1 00:18:49.536 }, 00:18:49.536 { 00:18:49.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.536 "dma_device_type": 2 00:18:49.536 } 00:18:49.536 ], 00:18:49.536 "driver_specific": {} 00:18:49.536 } 00:18:49.536 ] 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.536 "name": "Existed_Raid", 00:18:49.536 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:49.536 "strip_size_kb": 0, 00:18:49.536 "state": "configuring", 00:18:49.536 "raid_level": "raid1", 00:18:49.536 "superblock": false, 00:18:49.536 "num_base_bdevs": 4, 00:18:49.536 "num_base_bdevs_discovered": 3, 00:18:49.536 "num_base_bdevs_operational": 4, 00:18:49.536 "base_bdevs_list": [ 00:18:49.536 { 00:18:49.536 "name": "BaseBdev1", 00:18:49.536 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:49.536 "is_configured": true, 00:18:49.536 "data_offset": 0, 00:18:49.536 "data_size": 65536 00:18:49.536 }, 00:18:49.536 { 00:18:49.536 "name": null, 00:18:49.536 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:49.536 "is_configured": false, 00:18:49.536 "data_offset": 0, 00:18:49.536 "data_size": 65536 00:18:49.536 }, 00:18:49.536 { 00:18:49.536 "name": "BaseBdev3", 00:18:49.536 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:49.536 "is_configured": true, 00:18:49.536 "data_offset": 0, 00:18:49.536 "data_size": 65536 00:18:49.536 }, 00:18:49.536 { 00:18:49.536 "name": "BaseBdev4", 00:18:49.536 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:49.536 "is_configured": true, 00:18:49.536 "data_offset": 0, 00:18:49.536 "data_size": 65536 00:18:49.536 } 00:18:49.536 ] 00:18:49.536 }' 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.536 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.795 [2024-11-06 09:11:48.787519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.795 09:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.054 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.054 "name": "Existed_Raid", 00:18:50.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.054 "strip_size_kb": 0, 00:18:50.054 "state": "configuring", 00:18:50.054 "raid_level": "raid1", 00:18:50.054 "superblock": false, 00:18:50.054 "num_base_bdevs": 4, 00:18:50.054 "num_base_bdevs_discovered": 2, 00:18:50.054 "num_base_bdevs_operational": 4, 00:18:50.054 "base_bdevs_list": [ 00:18:50.054 { 00:18:50.054 "name": "BaseBdev1", 00:18:50.054 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:50.054 "is_configured": true, 00:18:50.054 "data_offset": 0, 00:18:50.054 "data_size": 65536 00:18:50.054 }, 00:18:50.054 { 00:18:50.054 "name": null, 00:18:50.054 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:50.054 "is_configured": false, 00:18:50.054 "data_offset": 0, 00:18:50.054 "data_size": 65536 00:18:50.054 }, 00:18:50.054 { 00:18:50.054 "name": null, 00:18:50.054 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:50.054 "is_configured": false, 00:18:50.054 "data_offset": 0, 00:18:50.054 "data_size": 65536 00:18:50.054 }, 00:18:50.054 { 00:18:50.054 "name": "BaseBdev4", 00:18:50.054 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:50.054 "is_configured": true, 00:18:50.054 "data_offset": 0, 00:18:50.054 "data_size": 65536 00:18:50.054 } 00:18:50.054 ] 00:18:50.054 }' 00:18:50.054 09:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.054 09:11:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 [2024-11-06 09:11:49.191438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.313 09:11:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.313 "name": "Existed_Raid", 00:18:50.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.313 "strip_size_kb": 0, 00:18:50.313 "state": "configuring", 00:18:50.313 "raid_level": "raid1", 00:18:50.313 "superblock": false, 00:18:50.313 "num_base_bdevs": 4, 00:18:50.313 "num_base_bdevs_discovered": 3, 00:18:50.313 "num_base_bdevs_operational": 4, 00:18:50.313 "base_bdevs_list": [ 00:18:50.313 { 00:18:50.313 "name": "BaseBdev1", 00:18:50.313 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:50.313 "is_configured": true, 00:18:50.313 "data_offset": 0, 00:18:50.313 "data_size": 65536 00:18:50.313 }, 00:18:50.313 { 00:18:50.313 "name": null, 00:18:50.313 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:50.313 "is_configured": false, 00:18:50.313 "data_offset": 
0, 00:18:50.313 "data_size": 65536 00:18:50.313 }, 00:18:50.313 { 00:18:50.314 "name": "BaseBdev3", 00:18:50.314 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:50.314 "is_configured": true, 00:18:50.314 "data_offset": 0, 00:18:50.314 "data_size": 65536 00:18:50.314 }, 00:18:50.314 { 00:18:50.314 "name": "BaseBdev4", 00:18:50.314 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:50.314 "is_configured": true, 00:18:50.314 "data_offset": 0, 00:18:50.314 "data_size": 65536 00:18:50.314 } 00:18:50.314 ] 00:18:50.314 }' 00:18:50.314 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.314 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.572 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.572 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.572 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.572 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:50.572 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.831 [2024-11-06 09:11:49.635027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.831 09:11:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.831 "name": "Existed_Raid", 00:18:50.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.831 "strip_size_kb": 0, 00:18:50.831 "state": "configuring", 00:18:50.831 
"raid_level": "raid1", 00:18:50.831 "superblock": false, 00:18:50.831 "num_base_bdevs": 4, 00:18:50.831 "num_base_bdevs_discovered": 2, 00:18:50.831 "num_base_bdevs_operational": 4, 00:18:50.831 "base_bdevs_list": [ 00:18:50.831 { 00:18:50.831 "name": null, 00:18:50.831 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:50.831 "is_configured": false, 00:18:50.831 "data_offset": 0, 00:18:50.831 "data_size": 65536 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": null, 00:18:50.831 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:50.831 "is_configured": false, 00:18:50.831 "data_offset": 0, 00:18:50.831 "data_size": 65536 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": "BaseBdev3", 00:18:50.831 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 0, 00:18:50.831 "data_size": 65536 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": "BaseBdev4", 00:18:50.831 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 0, 00:18:50.831 "data_size": 65536 00:18:50.831 } 00:18:50.831 ] 00:18:50.831 }' 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.831 09:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.089 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.089 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:51.089 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.355 [2024-11-06 09:11:50.142034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.355 "name": "Existed_Raid", 00:18:51.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.355 "strip_size_kb": 0, 00:18:51.355 "state": "configuring", 00:18:51.355 "raid_level": "raid1", 00:18:51.355 "superblock": false, 00:18:51.355 "num_base_bdevs": 4, 00:18:51.355 "num_base_bdevs_discovered": 3, 00:18:51.355 "num_base_bdevs_operational": 4, 00:18:51.355 "base_bdevs_list": [ 00:18:51.355 { 00:18:51.355 "name": null, 00:18:51.355 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:51.355 "is_configured": false, 00:18:51.355 "data_offset": 0, 00:18:51.355 "data_size": 65536 00:18:51.355 }, 00:18:51.355 { 00:18:51.355 "name": "BaseBdev2", 00:18:51.355 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:51.355 "is_configured": true, 00:18:51.355 "data_offset": 0, 00:18:51.355 "data_size": 65536 00:18:51.355 }, 00:18:51.355 { 00:18:51.355 "name": "BaseBdev3", 00:18:51.355 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:51.355 "is_configured": true, 00:18:51.355 "data_offset": 0, 00:18:51.355 "data_size": 65536 00:18:51.355 }, 00:18:51.355 { 00:18:51.355 "name": "BaseBdev4", 00:18:51.355 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:51.355 "is_configured": true, 00:18:51.355 "data_offset": 0, 00:18:51.355 "data_size": 65536 00:18:51.355 } 00:18:51.355 ] 00:18:51.355 }' 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.355 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.621 09:11:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 436072d4-8a0c-4dd4-b29e-d77914bbed59 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.621 [2024-11-06 09:11:50.643842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:51.621 [2024-11-06 09:11:50.643897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:51.621 [2024-11-06 09:11:50.643909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:51.621 
[2024-11-06 09:11:50.644190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:51.621 [2024-11-06 09:11:50.644369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:51.621 [2024-11-06 09:11:50.644380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:51.621 [2024-11-06 09:11:50.644647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.621 NewBaseBdev 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.621 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.881 [ 00:18:51.881 { 00:18:51.881 "name": "NewBaseBdev", 00:18:51.881 "aliases": [ 00:18:51.881 "436072d4-8a0c-4dd4-b29e-d77914bbed59" 00:18:51.881 ], 00:18:51.881 "product_name": "Malloc disk", 00:18:51.881 "block_size": 512, 00:18:51.881 "num_blocks": 65536, 00:18:51.881 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:51.881 "assigned_rate_limits": { 00:18:51.881 "rw_ios_per_sec": 0, 00:18:51.881 "rw_mbytes_per_sec": 0, 00:18:51.881 "r_mbytes_per_sec": 0, 00:18:51.881 "w_mbytes_per_sec": 0 00:18:51.881 }, 00:18:51.881 "claimed": true, 00:18:51.881 "claim_type": "exclusive_write", 00:18:51.881 "zoned": false, 00:18:51.881 "supported_io_types": { 00:18:51.881 "read": true, 00:18:51.881 "write": true, 00:18:51.881 "unmap": true, 00:18:51.881 "flush": true, 00:18:51.881 "reset": true, 00:18:51.881 "nvme_admin": false, 00:18:51.881 "nvme_io": false, 00:18:51.881 "nvme_io_md": false, 00:18:51.881 "write_zeroes": true, 00:18:51.881 "zcopy": true, 00:18:51.881 "get_zone_info": false, 00:18:51.881 "zone_management": false, 00:18:51.881 "zone_append": false, 00:18:51.881 "compare": false, 00:18:51.881 "compare_and_write": false, 00:18:51.881 "abort": true, 00:18:51.881 "seek_hole": false, 00:18:51.881 "seek_data": false, 00:18:51.881 "copy": true, 00:18:51.881 "nvme_iov_md": false 00:18:51.881 }, 00:18:51.881 "memory_domains": [ 00:18:51.881 { 00:18:51.881 "dma_device_id": "system", 00:18:51.881 "dma_device_type": 1 00:18:51.881 }, 00:18:51.881 { 00:18:51.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.881 "dma_device_type": 2 00:18:51.881 } 00:18:51.881 ], 00:18:51.881 "driver_specific": {} 00:18:51.881 } 00:18:51.881 ] 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.881 "name": "Existed_Raid", 00:18:51.881 "uuid": "f6ab2240-3951-4a97-92b2-894f2ed8b61c", 00:18:51.881 "strip_size_kb": 0, 00:18:51.881 "state": "online", 00:18:51.881 
"raid_level": "raid1", 00:18:51.881 "superblock": false, 00:18:51.881 "num_base_bdevs": 4, 00:18:51.881 "num_base_bdevs_discovered": 4, 00:18:51.881 "num_base_bdevs_operational": 4, 00:18:51.881 "base_bdevs_list": [ 00:18:51.881 { 00:18:51.881 "name": "NewBaseBdev", 00:18:51.881 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:51.881 "is_configured": true, 00:18:51.881 "data_offset": 0, 00:18:51.881 "data_size": 65536 00:18:51.881 }, 00:18:51.881 { 00:18:51.881 "name": "BaseBdev2", 00:18:51.881 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:51.881 "is_configured": true, 00:18:51.881 "data_offset": 0, 00:18:51.881 "data_size": 65536 00:18:51.881 }, 00:18:51.881 { 00:18:51.881 "name": "BaseBdev3", 00:18:51.881 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:51.881 "is_configured": true, 00:18:51.881 "data_offset": 0, 00:18:51.881 "data_size": 65536 00:18:51.881 }, 00:18:51.881 { 00:18:51.881 "name": "BaseBdev4", 00:18:51.881 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:51.881 "is_configured": true, 00:18:51.881 "data_offset": 0, 00:18:51.881 "data_size": 65536 00:18:51.881 } 00:18:51.881 ] 00:18:51.881 }' 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.881 09:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.141 [2024-11-06 09:11:51.127739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.141 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.141 "name": "Existed_Raid", 00:18:52.141 "aliases": [ 00:18:52.141 "f6ab2240-3951-4a97-92b2-894f2ed8b61c" 00:18:52.141 ], 00:18:52.141 "product_name": "Raid Volume", 00:18:52.141 "block_size": 512, 00:18:52.141 "num_blocks": 65536, 00:18:52.141 "uuid": "f6ab2240-3951-4a97-92b2-894f2ed8b61c", 00:18:52.141 "assigned_rate_limits": { 00:18:52.141 "rw_ios_per_sec": 0, 00:18:52.141 "rw_mbytes_per_sec": 0, 00:18:52.141 "r_mbytes_per_sec": 0, 00:18:52.141 "w_mbytes_per_sec": 0 00:18:52.141 }, 00:18:52.141 "claimed": false, 00:18:52.141 "zoned": false, 00:18:52.141 "supported_io_types": { 00:18:52.141 "read": true, 00:18:52.141 "write": true, 00:18:52.141 "unmap": false, 00:18:52.141 "flush": false, 00:18:52.141 "reset": true, 00:18:52.141 "nvme_admin": false, 00:18:52.141 "nvme_io": false, 00:18:52.141 "nvme_io_md": false, 00:18:52.141 "write_zeroes": true, 00:18:52.141 "zcopy": false, 00:18:52.141 "get_zone_info": false, 00:18:52.141 "zone_management": false, 00:18:52.141 "zone_append": false, 00:18:52.141 "compare": false, 00:18:52.141 "compare_and_write": false, 00:18:52.141 "abort": false, 00:18:52.141 "seek_hole": false, 00:18:52.141 "seek_data": false, 00:18:52.141 
"copy": false, 00:18:52.141 "nvme_iov_md": false 00:18:52.141 }, 00:18:52.141 "memory_domains": [ 00:18:52.141 { 00:18:52.141 "dma_device_id": "system", 00:18:52.141 "dma_device_type": 1 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.141 "dma_device_type": 2 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "system", 00:18:52.141 "dma_device_type": 1 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.141 "dma_device_type": 2 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "system", 00:18:52.141 "dma_device_type": 1 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.141 "dma_device_type": 2 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "system", 00:18:52.141 "dma_device_type": 1 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.141 "dma_device_type": 2 00:18:52.141 } 00:18:52.141 ], 00:18:52.141 "driver_specific": { 00:18:52.141 "raid": { 00:18:52.141 "uuid": "f6ab2240-3951-4a97-92b2-894f2ed8b61c", 00:18:52.141 "strip_size_kb": 0, 00:18:52.141 "state": "online", 00:18:52.141 "raid_level": "raid1", 00:18:52.141 "superblock": false, 00:18:52.141 "num_base_bdevs": 4, 00:18:52.141 "num_base_bdevs_discovered": 4, 00:18:52.141 "num_base_bdevs_operational": 4, 00:18:52.141 "base_bdevs_list": [ 00:18:52.141 { 00:18:52.141 "name": "NewBaseBdev", 00:18:52.141 "uuid": "436072d4-8a0c-4dd4-b29e-d77914bbed59", 00:18:52.141 "is_configured": true, 00:18:52.141 "data_offset": 0, 00:18:52.141 "data_size": 65536 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "name": "BaseBdev2", 00:18:52.141 "uuid": "6fa6444a-dce3-4a76-93e3-e33e2f404838", 00:18:52.141 "is_configured": true, 00:18:52.141 "data_offset": 0, 00:18:52.141 "data_size": 65536 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "name": "BaseBdev3", 00:18:52.141 "uuid": "686eb5e7-9530-4504-ba50-f004a50ee660", 00:18:52.141 
"is_configured": true, 00:18:52.141 "data_offset": 0, 00:18:52.141 "data_size": 65536 00:18:52.141 }, 00:18:52.141 { 00:18:52.141 "name": "BaseBdev4", 00:18:52.141 "uuid": "8d0ea64b-a806-47ca-90c3-13d184e224d8", 00:18:52.141 "is_configured": true, 00:18:52.141 "data_offset": 0, 00:18:52.141 "data_size": 65536 00:18:52.141 } 00:18:52.141 ] 00:18:52.141 } 00:18:52.142 } 00:18:52.142 }' 00:18:52.142 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:52.401 BaseBdev2 00:18:52.401 BaseBdev3 00:18:52.401 BaseBdev4' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:52.401 09:11:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.401 09:11:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.660 [2024-11-06 09:11:51.443115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.660 [2024-11-06 09:11:51.443147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.660 [2024-11-06 09:11:51.443248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.660 [2024-11-06 09:11:51.443551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.660 [2024-11-06 09:11:51.443570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72930 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 72930 ']' 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 72930 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72930 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:52.660 killing process with pid 72930 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72930' 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 72930 00:18:52.660 [2024-11-06 09:11:51.493628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.660 09:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 72930 00:18:52.919 [2024-11-06 09:11:51.892713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:54.295 00:18:54.295 real 0m10.880s 00:18:54.295 user 0m17.042s 00:18:54.295 sys 0m2.223s 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:54.295 ************************************ 00:18:54.295 END TEST raid_state_function_test 00:18:54.295 ************************************ 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:54.295 09:11:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:54.295 09:11:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:54.295 09:11:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:54.295 09:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.295 ************************************ 00:18:54.295 START TEST raid_state_function_test_sb 00:18:54.295 ************************************ 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:54.295 
09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73590 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 73590' 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:54.295 Process raid pid: 73590 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73590 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73590 ']' 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.295 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.296 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.296 09:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.296 [2024-11-06 09:11:53.191093] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:18:54.296 [2024-11-06 09:11:53.191222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.554 [2024-11-06 09:11:53.375441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.554 [2024-11-06 09:11:53.498556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.813 [2024-11-06 09:11:53.710438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.813 [2024-11-06 09:11:53.710483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.072 [2024-11-06 09:11:54.020998] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.072 [2024-11-06 09:11:54.021058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.072 [2024-11-06 09:11:54.021070] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.072 [2024-11-06 09:11:54.021083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.072 [2024-11-06 09:11:54.021092] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:55.072 [2024-11-06 09:11:54.021104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.072 [2024-11-06 09:11:54.021111] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:55.072 [2024-11-06 09:11:54.021123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.072 09:11:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.072 "name": "Existed_Raid", 00:18:55.072 "uuid": "466fb3d7-8cda-4ffa-84f0-6055c3df088b", 00:18:55.072 "strip_size_kb": 0, 00:18:55.072 "state": "configuring", 00:18:55.072 "raid_level": "raid1", 00:18:55.072 "superblock": true, 00:18:55.072 "num_base_bdevs": 4, 00:18:55.072 "num_base_bdevs_discovered": 0, 00:18:55.072 "num_base_bdevs_operational": 4, 00:18:55.072 "base_bdevs_list": [ 00:18:55.072 { 00:18:55.072 "name": "BaseBdev1", 00:18:55.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.072 "is_configured": false, 00:18:55.072 "data_offset": 0, 00:18:55.072 "data_size": 0 00:18:55.072 }, 00:18:55.072 { 00:18:55.072 "name": "BaseBdev2", 00:18:55.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.072 "is_configured": false, 00:18:55.072 "data_offset": 0, 00:18:55.072 "data_size": 0 00:18:55.072 }, 00:18:55.072 { 00:18:55.072 "name": "BaseBdev3", 00:18:55.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.072 "is_configured": false, 00:18:55.072 "data_offset": 0, 00:18:55.072 "data_size": 0 00:18:55.072 }, 00:18:55.072 { 00:18:55.072 "name": "BaseBdev4", 00:18:55.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.072 "is_configured": false, 00:18:55.072 "data_offset": 0, 00:18:55.072 "data_size": 0 00:18:55.072 } 00:18:55.072 ] 00:18:55.072 }' 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.072 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 [2024-11-06 09:11:54.436513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.639 [2024-11-06 09:11:54.436571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 [2024-11-06 09:11:54.448518] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.639 [2024-11-06 09:11:54.448728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.639 [2024-11-06 09:11:54.448888] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.639 [2024-11-06 09:11:54.448962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.639 [2024-11-06 09:11:54.449088] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.639 [2024-11-06 09:11:54.449241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.639 [2024-11-06 09:11:54.449373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:55.639 [2024-11-06 09:11:54.449439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 [2024-11-06 09:11:54.496746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.639 BaseBdev1 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.639 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.639 [ 00:18:55.639 { 00:18:55.639 "name": "BaseBdev1", 00:18:55.639 "aliases": [ 00:18:55.639 "8005895f-2c02-4de5-be5b-28a9e9b2bbe9" 00:18:55.639 ], 00:18:55.639 "product_name": "Malloc disk", 00:18:55.639 "block_size": 512, 00:18:55.639 "num_blocks": 65536, 00:18:55.639 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:55.639 "assigned_rate_limits": { 00:18:55.639 "rw_ios_per_sec": 0, 00:18:55.639 "rw_mbytes_per_sec": 0, 00:18:55.639 "r_mbytes_per_sec": 0, 00:18:55.639 "w_mbytes_per_sec": 0 00:18:55.639 }, 00:18:55.639 "claimed": true, 00:18:55.639 "claim_type": "exclusive_write", 00:18:55.639 "zoned": false, 00:18:55.639 "supported_io_types": { 00:18:55.639 "read": true, 00:18:55.639 "write": true, 00:18:55.639 "unmap": true, 00:18:55.639 "flush": true, 00:18:55.639 "reset": true, 00:18:55.639 "nvme_admin": false, 00:18:55.639 "nvme_io": false, 00:18:55.639 "nvme_io_md": false, 00:18:55.639 "write_zeroes": true, 00:18:55.639 "zcopy": true, 00:18:55.639 "get_zone_info": false, 00:18:55.639 "zone_management": false, 00:18:55.639 "zone_append": false, 00:18:55.639 "compare": false, 00:18:55.639 "compare_and_write": false, 00:18:55.639 "abort": true, 00:18:55.639 "seek_hole": false, 00:18:55.639 "seek_data": false, 00:18:55.639 "copy": true, 00:18:55.639 "nvme_iov_md": false 00:18:55.639 }, 00:18:55.639 "memory_domains": [ 00:18:55.640 { 00:18:55.640 "dma_device_id": "system", 00:18:55.640 "dma_device_type": 1 00:18:55.640 }, 00:18:55.640 { 00:18:55.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.640 "dma_device_type": 2 00:18:55.640 } 00:18:55.640 ], 00:18:55.640 "driver_specific": {} 
00:18:55.640 } 00:18:55.640 ] 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.640 "name": "Existed_Raid", 00:18:55.640 "uuid": "1e70da22-3be1-4092-b7ef-c1cf673e2986", 00:18:55.640 "strip_size_kb": 0, 00:18:55.640 "state": "configuring", 00:18:55.640 "raid_level": "raid1", 00:18:55.640 "superblock": true, 00:18:55.640 "num_base_bdevs": 4, 00:18:55.640 "num_base_bdevs_discovered": 1, 00:18:55.640 "num_base_bdevs_operational": 4, 00:18:55.640 "base_bdevs_list": [ 00:18:55.640 { 00:18:55.640 "name": "BaseBdev1", 00:18:55.640 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:55.640 "is_configured": true, 00:18:55.640 "data_offset": 2048, 00:18:55.640 "data_size": 63488 00:18:55.640 }, 00:18:55.640 { 00:18:55.640 "name": "BaseBdev2", 00:18:55.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.640 "is_configured": false, 00:18:55.640 "data_offset": 0, 00:18:55.640 "data_size": 0 00:18:55.640 }, 00:18:55.640 { 00:18:55.640 "name": "BaseBdev3", 00:18:55.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.640 "is_configured": false, 00:18:55.640 "data_offset": 0, 00:18:55.640 "data_size": 0 00:18:55.640 }, 00:18:55.640 { 00:18:55.640 "name": "BaseBdev4", 00:18:55.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.640 "is_configured": false, 00:18:55.640 "data_offset": 0, 00:18:55.640 "data_size": 0 00:18:55.640 } 00:18:55.640 ] 00:18:55.640 }' 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.640 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.207 [2024-11-06 09:11:54.992375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.207 [2024-11-06 09:11:54.992435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.207 09:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.207 [2024-11-06 09:11:55.004421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.207 [2024-11-06 09:11:55.006510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.207 [2024-11-06 09:11:55.006699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.207 [2024-11-06 09:11:55.006722] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.207 [2024-11-06 09:11:55.006741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.207 [2024-11-06 09:11:55.006750] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.207 [2024-11-06 09:11:55.006763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.207 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.207 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:56.207 09:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.207 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.208 "name": 
"Existed_Raid", 00:18:56.208 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:56.208 "strip_size_kb": 0, 00:18:56.208 "state": "configuring", 00:18:56.208 "raid_level": "raid1", 00:18:56.208 "superblock": true, 00:18:56.208 "num_base_bdevs": 4, 00:18:56.208 "num_base_bdevs_discovered": 1, 00:18:56.208 "num_base_bdevs_operational": 4, 00:18:56.208 "base_bdevs_list": [ 00:18:56.208 { 00:18:56.208 "name": "BaseBdev1", 00:18:56.208 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:56.208 "is_configured": true, 00:18:56.208 "data_offset": 2048, 00:18:56.208 "data_size": 63488 00:18:56.208 }, 00:18:56.208 { 00:18:56.208 "name": "BaseBdev2", 00:18:56.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.208 "is_configured": false, 00:18:56.208 "data_offset": 0, 00:18:56.208 "data_size": 0 00:18:56.208 }, 00:18:56.208 { 00:18:56.208 "name": "BaseBdev3", 00:18:56.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.208 "is_configured": false, 00:18:56.208 "data_offset": 0, 00:18:56.208 "data_size": 0 00:18:56.208 }, 00:18:56.208 { 00:18:56.208 "name": "BaseBdev4", 00:18:56.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.208 "is_configured": false, 00:18:56.208 "data_offset": 0, 00:18:56.208 "data_size": 0 00:18:56.208 } 00:18:56.208 ] 00:18:56.208 }' 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.208 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.468 [2024-11-06 09:11:55.444319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.468 
BaseBdev2 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.468 [ 00:18:56.468 { 00:18:56.468 "name": "BaseBdev2", 00:18:56.468 "aliases": [ 00:18:56.468 "a3b5beae-2c11-4e43-981a-b22129cfbf4f" 00:18:56.468 ], 00:18:56.468 "product_name": "Malloc disk", 00:18:56.468 "block_size": 512, 00:18:56.468 "num_blocks": 65536, 00:18:56.468 "uuid": "a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:56.468 "assigned_rate_limits": { 
00:18:56.468 "rw_ios_per_sec": 0, 00:18:56.468 "rw_mbytes_per_sec": 0, 00:18:56.468 "r_mbytes_per_sec": 0, 00:18:56.468 "w_mbytes_per_sec": 0 00:18:56.468 }, 00:18:56.468 "claimed": true, 00:18:56.468 "claim_type": "exclusive_write", 00:18:56.468 "zoned": false, 00:18:56.468 "supported_io_types": { 00:18:56.468 "read": true, 00:18:56.468 "write": true, 00:18:56.468 "unmap": true, 00:18:56.468 "flush": true, 00:18:56.468 "reset": true, 00:18:56.468 "nvme_admin": false, 00:18:56.468 "nvme_io": false, 00:18:56.468 "nvme_io_md": false, 00:18:56.468 "write_zeroes": true, 00:18:56.468 "zcopy": true, 00:18:56.468 "get_zone_info": false, 00:18:56.468 "zone_management": false, 00:18:56.468 "zone_append": false, 00:18:56.468 "compare": false, 00:18:56.468 "compare_and_write": false, 00:18:56.468 "abort": true, 00:18:56.468 "seek_hole": false, 00:18:56.468 "seek_data": false, 00:18:56.468 "copy": true, 00:18:56.468 "nvme_iov_md": false 00:18:56.468 }, 00:18:56.468 "memory_domains": [ 00:18:56.468 { 00:18:56.468 "dma_device_id": "system", 00:18:56.468 "dma_device_type": 1 00:18:56.468 }, 00:18:56.468 { 00:18:56.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.468 "dma_device_type": 2 00:18:56.468 } 00:18:56.468 ], 00:18:56.468 "driver_specific": {} 00:18:56.468 } 00:18:56.468 ] 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.468 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.726 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.726 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.726 "name": "Existed_Raid", 00:18:56.726 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:56.726 "strip_size_kb": 0, 00:18:56.726 "state": "configuring", 00:18:56.726 "raid_level": "raid1", 00:18:56.726 "superblock": true, 00:18:56.727 "num_base_bdevs": 4, 00:18:56.727 "num_base_bdevs_discovered": 2, 00:18:56.727 "num_base_bdevs_operational": 4, 00:18:56.727 
"base_bdevs_list": [ 00:18:56.727 { 00:18:56.727 "name": "BaseBdev1", 00:18:56.727 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:56.727 "is_configured": true, 00:18:56.727 "data_offset": 2048, 00:18:56.727 "data_size": 63488 00:18:56.727 }, 00:18:56.727 { 00:18:56.727 "name": "BaseBdev2", 00:18:56.727 "uuid": "a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:56.727 "is_configured": true, 00:18:56.727 "data_offset": 2048, 00:18:56.727 "data_size": 63488 00:18:56.727 }, 00:18:56.727 { 00:18:56.727 "name": "BaseBdev3", 00:18:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.727 "is_configured": false, 00:18:56.727 "data_offset": 0, 00:18:56.727 "data_size": 0 00:18:56.727 }, 00:18:56.727 { 00:18:56.727 "name": "BaseBdev4", 00:18:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.727 "is_configured": false, 00:18:56.727 "data_offset": 0, 00:18:56.727 "data_size": 0 00:18:56.727 } 00:18:56.727 ] 00:18:56.727 }' 00:18:56.727 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.727 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.986 [2024-11-06 09:11:55.988474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.986 BaseBdev3 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.986 09:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.986 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.986 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:56.986 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.986 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.986 [ 00:18:56.986 { 00:18:56.986 "name": "BaseBdev3", 00:18:56.986 "aliases": [ 00:18:56.986 "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7" 00:18:56.986 ], 00:18:56.986 "product_name": "Malloc disk", 00:18:56.986 "block_size": 512, 00:18:56.986 "num_blocks": 65536, 00:18:56.986 "uuid": "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7", 00:18:56.986 "assigned_rate_limits": { 00:18:56.986 "rw_ios_per_sec": 0, 00:18:56.986 "rw_mbytes_per_sec": 0, 00:18:56.986 "r_mbytes_per_sec": 0, 00:18:56.986 "w_mbytes_per_sec": 0 00:18:56.986 }, 00:18:56.986 "claimed": true, 00:18:56.986 "claim_type": "exclusive_write", 00:18:56.986 "zoned": false, 00:18:57.244 "supported_io_types": { 00:18:57.244 "read": true, 00:18:57.244 
"write": true, 00:18:57.244 "unmap": true, 00:18:57.244 "flush": true, 00:18:57.244 "reset": true, 00:18:57.244 "nvme_admin": false, 00:18:57.244 "nvme_io": false, 00:18:57.244 "nvme_io_md": false, 00:18:57.244 "write_zeroes": true, 00:18:57.244 "zcopy": true, 00:18:57.244 "get_zone_info": false, 00:18:57.244 "zone_management": false, 00:18:57.244 "zone_append": false, 00:18:57.244 "compare": false, 00:18:57.244 "compare_and_write": false, 00:18:57.244 "abort": true, 00:18:57.244 "seek_hole": false, 00:18:57.244 "seek_data": false, 00:18:57.244 "copy": true, 00:18:57.244 "nvme_iov_md": false 00:18:57.244 }, 00:18:57.244 "memory_domains": [ 00:18:57.244 { 00:18:57.244 "dma_device_id": "system", 00:18:57.244 "dma_device_type": 1 00:18:57.244 }, 00:18:57.244 { 00:18:57.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.244 "dma_device_type": 2 00:18:57.244 } 00:18:57.244 ], 00:18:57.244 "driver_specific": {} 00:18:57.244 } 00:18:57.244 ] 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.244 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.244 "name": "Existed_Raid", 00:18:57.244 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:57.244 "strip_size_kb": 0, 00:18:57.244 "state": "configuring", 00:18:57.244 "raid_level": "raid1", 00:18:57.244 "superblock": true, 00:18:57.244 "num_base_bdevs": 4, 00:18:57.244 "num_base_bdevs_discovered": 3, 00:18:57.244 "num_base_bdevs_operational": 4, 00:18:57.244 "base_bdevs_list": [ 00:18:57.244 { 00:18:57.244 "name": "BaseBdev1", 00:18:57.244 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:57.244 "is_configured": true, 00:18:57.244 "data_offset": 2048, 00:18:57.244 "data_size": 63488 00:18:57.244 }, 00:18:57.244 { 00:18:57.244 "name": "BaseBdev2", 00:18:57.244 "uuid": 
"a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:57.244 "is_configured": true, 00:18:57.244 "data_offset": 2048, 00:18:57.244 "data_size": 63488 00:18:57.244 }, 00:18:57.244 { 00:18:57.244 "name": "BaseBdev3", 00:18:57.245 "uuid": "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7", 00:18:57.245 "is_configured": true, 00:18:57.245 "data_offset": 2048, 00:18:57.245 "data_size": 63488 00:18:57.245 }, 00:18:57.245 { 00:18:57.245 "name": "BaseBdev4", 00:18:57.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.245 "is_configured": false, 00:18:57.245 "data_offset": 0, 00:18:57.245 "data_size": 0 00:18:57.245 } 00:18:57.245 ] 00:18:57.245 }' 00:18:57.245 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.245 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.503 [2024-11-06 09:11:56.472519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:57.503 [2024-11-06 09:11:56.472805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:57.503 [2024-11-06 09:11:56.472822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:57.503 [2024-11-06 09:11:56.473120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:57.503 BaseBdev4 00:18:57.503 [2024-11-06 09:11:56.473296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:57.503 [2024-11-06 09:11:56.473315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:57.503 [2024-11-06 09:11:56.473459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.503 [ 00:18:57.503 { 00:18:57.503 "name": "BaseBdev4", 00:18:57.503 "aliases": [ 00:18:57.503 "9d1fb389-ed01-4582-8183-d1f2c7facdc6" 00:18:57.503 ], 00:18:57.503 "product_name": "Malloc disk", 00:18:57.503 "block_size": 512, 00:18:57.503 
"num_blocks": 65536, 00:18:57.503 "uuid": "9d1fb389-ed01-4582-8183-d1f2c7facdc6", 00:18:57.503 "assigned_rate_limits": { 00:18:57.503 "rw_ios_per_sec": 0, 00:18:57.503 "rw_mbytes_per_sec": 0, 00:18:57.503 "r_mbytes_per_sec": 0, 00:18:57.503 "w_mbytes_per_sec": 0 00:18:57.503 }, 00:18:57.503 "claimed": true, 00:18:57.503 "claim_type": "exclusive_write", 00:18:57.503 "zoned": false, 00:18:57.503 "supported_io_types": { 00:18:57.503 "read": true, 00:18:57.503 "write": true, 00:18:57.503 "unmap": true, 00:18:57.503 "flush": true, 00:18:57.503 "reset": true, 00:18:57.503 "nvme_admin": false, 00:18:57.503 "nvme_io": false, 00:18:57.503 "nvme_io_md": false, 00:18:57.503 "write_zeroes": true, 00:18:57.503 "zcopy": true, 00:18:57.503 "get_zone_info": false, 00:18:57.503 "zone_management": false, 00:18:57.503 "zone_append": false, 00:18:57.503 "compare": false, 00:18:57.503 "compare_and_write": false, 00:18:57.503 "abort": true, 00:18:57.503 "seek_hole": false, 00:18:57.503 "seek_data": false, 00:18:57.503 "copy": true, 00:18:57.503 "nvme_iov_md": false 00:18:57.503 }, 00:18:57.503 "memory_domains": [ 00:18:57.503 { 00:18:57.503 "dma_device_id": "system", 00:18:57.503 "dma_device_type": 1 00:18:57.503 }, 00:18:57.503 { 00:18:57.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.503 "dma_device_type": 2 00:18:57.503 } 00:18:57.503 ], 00:18:57.503 "driver_specific": {} 00:18:57.503 } 00:18:57.503 ] 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:57.503 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.504 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.763 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.763 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.763 "name": "Existed_Raid", 00:18:57.763 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:57.763 "strip_size_kb": 0, 00:18:57.763 "state": "online", 00:18:57.763 "raid_level": "raid1", 00:18:57.763 "superblock": true, 00:18:57.763 "num_base_bdevs": 4, 
00:18:57.763 "num_base_bdevs_discovered": 4, 00:18:57.763 "num_base_bdevs_operational": 4, 00:18:57.763 "base_bdevs_list": [ 00:18:57.763 { 00:18:57.763 "name": "BaseBdev1", 00:18:57.763 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:57.763 "is_configured": true, 00:18:57.763 "data_offset": 2048, 00:18:57.763 "data_size": 63488 00:18:57.763 }, 00:18:57.763 { 00:18:57.763 "name": "BaseBdev2", 00:18:57.763 "uuid": "a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:57.763 "is_configured": true, 00:18:57.763 "data_offset": 2048, 00:18:57.763 "data_size": 63488 00:18:57.763 }, 00:18:57.763 { 00:18:57.763 "name": "BaseBdev3", 00:18:57.763 "uuid": "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7", 00:18:57.763 "is_configured": true, 00:18:57.763 "data_offset": 2048, 00:18:57.763 "data_size": 63488 00:18:57.763 }, 00:18:57.763 { 00:18:57.763 "name": "BaseBdev4", 00:18:57.763 "uuid": "9d1fb389-ed01-4582-8183-d1f2c7facdc6", 00:18:57.763 "is_configured": true, 00:18:57.763 "data_offset": 2048, 00:18:57.763 "data_size": 63488 00:18:57.763 } 00:18:57.763 ] 00:18:57.763 }' 00:18:57.763 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.763 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:58.137 
09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.137 [2024-11-06 09:11:56.944497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.137 "name": "Existed_Raid", 00:18:58.137 "aliases": [ 00:18:58.137 "62554503-7917-4b07-bc49-16c656bd1163" 00:18:58.137 ], 00:18:58.137 "product_name": "Raid Volume", 00:18:58.137 "block_size": 512, 00:18:58.137 "num_blocks": 63488, 00:18:58.137 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:58.137 "assigned_rate_limits": { 00:18:58.137 "rw_ios_per_sec": 0, 00:18:58.137 "rw_mbytes_per_sec": 0, 00:18:58.137 "r_mbytes_per_sec": 0, 00:18:58.137 "w_mbytes_per_sec": 0 00:18:58.137 }, 00:18:58.137 "claimed": false, 00:18:58.137 "zoned": false, 00:18:58.137 "supported_io_types": { 00:18:58.137 "read": true, 00:18:58.137 "write": true, 00:18:58.137 "unmap": false, 00:18:58.137 "flush": false, 00:18:58.137 "reset": true, 00:18:58.137 "nvme_admin": false, 00:18:58.137 "nvme_io": false, 00:18:58.137 "nvme_io_md": false, 00:18:58.137 "write_zeroes": true, 00:18:58.137 "zcopy": false, 00:18:58.137 "get_zone_info": false, 00:18:58.137 "zone_management": false, 00:18:58.137 "zone_append": false, 00:18:58.137 "compare": false, 00:18:58.137 "compare_and_write": false, 00:18:58.137 "abort": false, 00:18:58.137 "seek_hole": false, 00:18:58.137 "seek_data": false, 00:18:58.137 "copy": false, 00:18:58.137 
"nvme_iov_md": false 00:18:58.137 }, 00:18:58.137 "memory_domains": [ 00:18:58.137 { 00:18:58.137 "dma_device_id": "system", 00:18:58.137 "dma_device_type": 1 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.137 "dma_device_type": 2 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "system", 00:18:58.137 "dma_device_type": 1 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.137 "dma_device_type": 2 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "system", 00:18:58.137 "dma_device_type": 1 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.137 "dma_device_type": 2 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "system", 00:18:58.137 "dma_device_type": 1 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.137 "dma_device_type": 2 00:18:58.137 } 00:18:58.137 ], 00:18:58.137 "driver_specific": { 00:18:58.137 "raid": { 00:18:58.137 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:58.137 "strip_size_kb": 0, 00:18:58.137 "state": "online", 00:18:58.137 "raid_level": "raid1", 00:18:58.137 "superblock": true, 00:18:58.137 "num_base_bdevs": 4, 00:18:58.137 "num_base_bdevs_discovered": 4, 00:18:58.137 "num_base_bdevs_operational": 4, 00:18:58.137 "base_bdevs_list": [ 00:18:58.137 { 00:18:58.137 "name": "BaseBdev1", 00:18:58.137 "uuid": "8005895f-2c02-4de5-be5b-28a9e9b2bbe9", 00:18:58.137 "is_configured": true, 00:18:58.137 "data_offset": 2048, 00:18:58.137 "data_size": 63488 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "name": "BaseBdev2", 00:18:58.137 "uuid": "a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:58.137 "is_configured": true, 00:18:58.137 "data_offset": 2048, 00:18:58.137 "data_size": 63488 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "name": "BaseBdev3", 00:18:58.137 "uuid": "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7", 00:18:58.137 "is_configured": true, 
00:18:58.137 "data_offset": 2048, 00:18:58.137 "data_size": 63488 00:18:58.137 }, 00:18:58.137 { 00:18:58.137 "name": "BaseBdev4", 00:18:58.137 "uuid": "9d1fb389-ed01-4582-8183-d1f2c7facdc6", 00:18:58.137 "is_configured": true, 00:18:58.137 "data_offset": 2048, 00:18:58.137 "data_size": 63488 00:18:58.137 } 00:18:58.137 ] 00:18:58.137 } 00:18:58.137 } 00:18:58.137 }' 00:18:58.137 09:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:58.137 BaseBdev2 00:18:58.137 BaseBdev3 00:18:58.137 BaseBdev4' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.137 09:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.137 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 [2024-11-06 09:11:57.271736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:58.396 09:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.396 "name": "Existed_Raid", 00:18:58.396 "uuid": "62554503-7917-4b07-bc49-16c656bd1163", 00:18:58.396 "strip_size_kb": 0, 00:18:58.396 
"state": "online", 00:18:58.396 "raid_level": "raid1", 00:18:58.396 "superblock": true, 00:18:58.396 "num_base_bdevs": 4, 00:18:58.396 "num_base_bdevs_discovered": 3, 00:18:58.396 "num_base_bdevs_operational": 3, 00:18:58.396 "base_bdevs_list": [ 00:18:58.396 { 00:18:58.396 "name": null, 00:18:58.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.396 "is_configured": false, 00:18:58.396 "data_offset": 0, 00:18:58.396 "data_size": 63488 00:18:58.396 }, 00:18:58.396 { 00:18:58.396 "name": "BaseBdev2", 00:18:58.396 "uuid": "a3b5beae-2c11-4e43-981a-b22129cfbf4f", 00:18:58.396 "is_configured": true, 00:18:58.396 "data_offset": 2048, 00:18:58.396 "data_size": 63488 00:18:58.396 }, 00:18:58.396 { 00:18:58.396 "name": "BaseBdev3", 00:18:58.396 "uuid": "fbf73f89-6961-43ac-aafb-3a0b10c2e1a7", 00:18:58.396 "is_configured": true, 00:18:58.396 "data_offset": 2048, 00:18:58.396 "data_size": 63488 00:18:58.396 }, 00:18:58.396 { 00:18:58.396 "name": "BaseBdev4", 00:18:58.396 "uuid": "9d1fb389-ed01-4582-8183-d1f2c7facdc6", 00:18:58.396 "is_configured": true, 00:18:58.396 "data_offset": 2048, 00:18:58.396 "data_size": 63488 00:18:58.396 } 00:18:58.396 ] 00:18:58.396 }' 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.396 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.963 09:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.963 [2024-11-06 09:11:57.848573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.963 09:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.221 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:59.221 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.222 [2024-11-06 09:11:58.008251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.222 [2024-11-06 09:11:58.160939] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:59.222 [2024-11-06 09:11:58.161044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.222 [2024-11-06 09:11:58.259124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.222 [2024-11-06 09:11:58.259380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.222 [2024-11-06 09:11:58.259412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:59.222 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.481 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:59.481 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:59.481 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 BaseBdev2 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:59.482 [ 00:18:59.482 { 00:18:59.482 "name": "BaseBdev2", 00:18:59.482 "aliases": [ 00:18:59.482 "3d377a93-b88e-471a-b4b3-2fb15e675565" 00:18:59.482 ], 00:18:59.482 "product_name": "Malloc disk", 00:18:59.482 "block_size": 512, 00:18:59.482 "num_blocks": 65536, 00:18:59.482 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:18:59.482 "assigned_rate_limits": { 00:18:59.482 "rw_ios_per_sec": 0, 00:18:59.482 "rw_mbytes_per_sec": 0, 00:18:59.482 "r_mbytes_per_sec": 0, 00:18:59.482 "w_mbytes_per_sec": 0 00:18:59.482 }, 00:18:59.482 "claimed": false, 00:18:59.482 "zoned": false, 00:18:59.482 "supported_io_types": { 00:18:59.482 "read": true, 00:18:59.482 "write": true, 00:18:59.482 "unmap": true, 00:18:59.482 "flush": true, 00:18:59.482 "reset": true, 00:18:59.482 "nvme_admin": false, 00:18:59.482 "nvme_io": false, 00:18:59.482 "nvme_io_md": false, 00:18:59.482 "write_zeroes": true, 00:18:59.482 "zcopy": true, 00:18:59.482 "get_zone_info": false, 00:18:59.482 "zone_management": false, 00:18:59.482 "zone_append": false, 00:18:59.482 "compare": false, 00:18:59.482 "compare_and_write": false, 00:18:59.482 "abort": true, 00:18:59.482 "seek_hole": false, 00:18:59.482 "seek_data": false, 00:18:59.482 "copy": true, 00:18:59.482 "nvme_iov_md": false 00:18:59.482 }, 00:18:59.482 "memory_domains": [ 00:18:59.482 { 00:18:59.482 "dma_device_id": "system", 00:18:59.482 "dma_device_type": 1 00:18:59.482 }, 00:18:59.482 { 00:18:59.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.482 "dma_device_type": 2 00:18:59.482 } 00:18:59.482 ], 00:18:59.482 "driver_specific": {} 00:18:59.482 } 00:18:59.482 ] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:59.482 09:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 BaseBdev3 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 [ 00:18:59.482 { 00:18:59.482 "name": "BaseBdev3", 00:18:59.482 "aliases": [ 00:18:59.482 "ba5cb889-a1e0-4844-a4ee-769227242069" 00:18:59.482 ], 00:18:59.482 "product_name": "Malloc disk", 00:18:59.482 "block_size": 512, 00:18:59.482 "num_blocks": 65536, 00:18:59.482 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:18:59.482 "assigned_rate_limits": { 00:18:59.482 "rw_ios_per_sec": 0, 00:18:59.482 "rw_mbytes_per_sec": 0, 00:18:59.482 "r_mbytes_per_sec": 0, 00:18:59.482 "w_mbytes_per_sec": 0 00:18:59.482 }, 00:18:59.482 "claimed": false, 00:18:59.482 "zoned": false, 00:18:59.482 "supported_io_types": { 00:18:59.482 "read": true, 00:18:59.482 "write": true, 00:18:59.482 "unmap": true, 00:18:59.482 "flush": true, 00:18:59.482 "reset": true, 00:18:59.482 "nvme_admin": false, 00:18:59.482 "nvme_io": false, 00:18:59.482 "nvme_io_md": false, 00:18:59.482 "write_zeroes": true, 00:18:59.482 "zcopy": true, 00:18:59.482 "get_zone_info": false, 00:18:59.482 "zone_management": false, 00:18:59.482 "zone_append": false, 00:18:59.482 "compare": false, 00:18:59.482 "compare_and_write": false, 00:18:59.482 "abort": true, 00:18:59.482 "seek_hole": false, 00:18:59.482 "seek_data": false, 00:18:59.482 "copy": true, 00:18:59.482 "nvme_iov_md": false 00:18:59.482 }, 00:18:59.482 "memory_domains": [ 00:18:59.482 { 00:18:59.482 "dma_device_id": "system", 00:18:59.482 "dma_device_type": 1 00:18:59.482 }, 00:18:59.482 { 00:18:59.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.482 "dma_device_type": 2 00:18:59.482 } 00:18:59.482 ], 00:18:59.482 "driver_specific": {} 00:18:59.482 } 00:18:59.482 ] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.482 BaseBdev4 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.482 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.741 [ 00:18:59.741 { 00:18:59.741 "name": "BaseBdev4", 00:18:59.741 "aliases": [ 00:18:59.741 "80e0b088-b994-4b24-9f4a-ac9b1236936f" 00:18:59.741 ], 00:18:59.741 "product_name": "Malloc disk", 00:18:59.741 "block_size": 512, 00:18:59.741 "num_blocks": 65536, 00:18:59.741 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:18:59.741 "assigned_rate_limits": { 00:18:59.741 "rw_ios_per_sec": 0, 00:18:59.741 "rw_mbytes_per_sec": 0, 00:18:59.741 "r_mbytes_per_sec": 0, 00:18:59.741 "w_mbytes_per_sec": 0 00:18:59.741 }, 00:18:59.741 "claimed": false, 00:18:59.741 "zoned": false, 00:18:59.741 "supported_io_types": { 00:18:59.741 "read": true, 00:18:59.741 "write": true, 00:18:59.741 "unmap": true, 00:18:59.741 "flush": true, 00:18:59.741 "reset": true, 00:18:59.741 "nvme_admin": false, 00:18:59.741 "nvme_io": false, 00:18:59.741 "nvme_io_md": false, 00:18:59.741 "write_zeroes": true, 00:18:59.741 "zcopy": true, 00:18:59.741 "get_zone_info": false, 00:18:59.741 "zone_management": false, 00:18:59.741 "zone_append": false, 00:18:59.741 "compare": false, 00:18:59.741 "compare_and_write": false, 00:18:59.741 "abort": true, 00:18:59.741 "seek_hole": false, 00:18:59.741 "seek_data": false, 00:18:59.741 "copy": true, 00:18:59.741 "nvme_iov_md": false 00:18:59.741 }, 00:18:59.741 "memory_domains": [ 00:18:59.741 { 00:18:59.741 "dma_device_id": "system", 00:18:59.741 "dma_device_type": 1 00:18:59.741 }, 00:18:59.741 { 00:18:59.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.741 "dma_device_type": 2 00:18:59.741 } 00:18:59.741 ], 00:18:59.741 "driver_specific": {} 00:18:59.741 } 00:18:59.741 ] 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.741 [2024-11-06 09:11:58.562969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.741 [2024-11-06 09:11:58.563022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.741 [2024-11-06 09:11:58.563044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.741 [2024-11-06 09:11:58.565149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:59.741 [2024-11-06 09:11:58.565347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.741 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.741 "name": "Existed_Raid", 00:18:59.741 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:18:59.741 "strip_size_kb": 0, 00:18:59.741 "state": "configuring", 00:18:59.741 "raid_level": "raid1", 00:18:59.741 "superblock": true, 00:18:59.741 "num_base_bdevs": 4, 00:18:59.741 "num_base_bdevs_discovered": 3, 00:18:59.741 "num_base_bdevs_operational": 4, 00:18:59.741 "base_bdevs_list": [ 00:18:59.741 { 00:18:59.741 "name": "BaseBdev1", 00:18:59.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.741 "is_configured": false, 00:18:59.741 "data_offset": 0, 00:18:59.741 "data_size": 0 00:18:59.741 }, 00:18:59.741 { 00:18:59.741 "name": "BaseBdev2", 00:18:59.741 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 
00:18:59.741 "is_configured": true, 00:18:59.742 "data_offset": 2048, 00:18:59.742 "data_size": 63488 00:18:59.742 }, 00:18:59.742 { 00:18:59.742 "name": "BaseBdev3", 00:18:59.742 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:18:59.742 "is_configured": true, 00:18:59.742 "data_offset": 2048, 00:18:59.742 "data_size": 63488 00:18:59.742 }, 00:18:59.742 { 00:18:59.742 "name": "BaseBdev4", 00:18:59.742 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:18:59.742 "is_configured": true, 00:18:59.742 "data_offset": 2048, 00:18:59.742 "data_size": 63488 00:18:59.742 } 00:18:59.742 ] 00:18:59.742 }' 00:18:59.742 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.742 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 [2024-11-06 09:11:58.954438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.000 09:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.000 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.000 "name": "Existed_Raid", 00:19:00.000 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:00.000 "strip_size_kb": 0, 00:19:00.000 "state": "configuring", 00:19:00.000 "raid_level": "raid1", 00:19:00.000 "superblock": true, 00:19:00.000 "num_base_bdevs": 4, 00:19:00.000 "num_base_bdevs_discovered": 2, 00:19:00.001 "num_base_bdevs_operational": 4, 00:19:00.001 "base_bdevs_list": [ 00:19:00.001 { 00:19:00.001 "name": "BaseBdev1", 00:19:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.001 "is_configured": false, 00:19:00.001 "data_offset": 0, 00:19:00.001 "data_size": 0 00:19:00.001 }, 00:19:00.001 { 00:19:00.001 "name": null, 00:19:00.001 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:00.001 
"is_configured": false, 00:19:00.001 "data_offset": 0, 00:19:00.001 "data_size": 63488 00:19:00.001 }, 00:19:00.001 { 00:19:00.001 "name": "BaseBdev3", 00:19:00.001 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:00.001 "is_configured": true, 00:19:00.001 "data_offset": 2048, 00:19:00.001 "data_size": 63488 00:19:00.001 }, 00:19:00.001 { 00:19:00.001 "name": "BaseBdev4", 00:19:00.001 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:00.001 "is_configured": true, 00:19:00.001 "data_offset": 2048, 00:19:00.001 "data_size": 63488 00:19:00.001 } 00:19:00.001 ] 00:19:00.001 }' 00:19:00.001 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.001 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 [2024-11-06 09:11:59.454082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.566 BaseBdev1 
00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 [ 00:19:00.566 { 00:19:00.566 "name": "BaseBdev1", 00:19:00.566 "aliases": [ 00:19:00.566 "70319057-4717-46b6-8ff6-9e753909b6fa" 00:19:00.566 ], 00:19:00.566 "product_name": "Malloc disk", 00:19:00.566 "block_size": 512, 00:19:00.566 "num_blocks": 65536, 00:19:00.566 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:00.566 "assigned_rate_limits": { 00:19:00.566 
"rw_ios_per_sec": 0, 00:19:00.566 "rw_mbytes_per_sec": 0, 00:19:00.566 "r_mbytes_per_sec": 0, 00:19:00.566 "w_mbytes_per_sec": 0 00:19:00.566 }, 00:19:00.566 "claimed": true, 00:19:00.566 "claim_type": "exclusive_write", 00:19:00.566 "zoned": false, 00:19:00.566 "supported_io_types": { 00:19:00.566 "read": true, 00:19:00.566 "write": true, 00:19:00.566 "unmap": true, 00:19:00.566 "flush": true, 00:19:00.566 "reset": true, 00:19:00.566 "nvme_admin": false, 00:19:00.566 "nvme_io": false, 00:19:00.566 "nvme_io_md": false, 00:19:00.566 "write_zeroes": true, 00:19:00.566 "zcopy": true, 00:19:00.566 "get_zone_info": false, 00:19:00.566 "zone_management": false, 00:19:00.566 "zone_append": false, 00:19:00.566 "compare": false, 00:19:00.566 "compare_and_write": false, 00:19:00.566 "abort": true, 00:19:00.566 "seek_hole": false, 00:19:00.566 "seek_data": false, 00:19:00.566 "copy": true, 00:19:00.566 "nvme_iov_md": false 00:19:00.566 }, 00:19:00.566 "memory_domains": [ 00:19:00.566 { 00:19:00.566 "dma_device_id": "system", 00:19:00.566 "dma_device_type": 1 00:19:00.566 }, 00:19:00.566 { 00:19:00.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.566 "dma_device_type": 2 00:19:00.566 } 00:19:00.566 ], 00:19:00.566 "driver_specific": {} 00:19:00.566 } 00:19:00.566 ] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.566 "name": "Existed_Raid", 00:19:00.567 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:00.567 "strip_size_kb": 0, 00:19:00.567 "state": "configuring", 00:19:00.567 "raid_level": "raid1", 00:19:00.567 "superblock": true, 00:19:00.567 "num_base_bdevs": 4, 00:19:00.567 "num_base_bdevs_discovered": 3, 00:19:00.567 "num_base_bdevs_operational": 4, 00:19:00.567 "base_bdevs_list": [ 00:19:00.567 { 00:19:00.567 "name": "BaseBdev1", 00:19:00.567 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:00.567 "is_configured": true, 00:19:00.567 "data_offset": 2048, 00:19:00.567 "data_size": 63488 
00:19:00.567 }, 00:19:00.567 { 00:19:00.567 "name": null, 00:19:00.567 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:00.567 "is_configured": false, 00:19:00.567 "data_offset": 0, 00:19:00.567 "data_size": 63488 00:19:00.567 }, 00:19:00.567 { 00:19:00.567 "name": "BaseBdev3", 00:19:00.567 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:00.567 "is_configured": true, 00:19:00.567 "data_offset": 2048, 00:19:00.567 "data_size": 63488 00:19:00.567 }, 00:19:00.567 { 00:19:00.567 "name": "BaseBdev4", 00:19:00.567 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:00.567 "is_configured": true, 00:19:00.567 "data_offset": 2048, 00:19:00.567 "data_size": 63488 00:19:00.567 } 00:19:00.567 ] 00:19:00.567 }' 00:19:00.567 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.567 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.132 
[2024-11-06 09:11:59.986051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.132 09:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.132 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.132 09:12:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.132 "name": "Existed_Raid", 00:19:01.132 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:01.132 "strip_size_kb": 0, 00:19:01.132 "state": "configuring", 00:19:01.132 "raid_level": "raid1", 00:19:01.132 "superblock": true, 00:19:01.132 "num_base_bdevs": 4, 00:19:01.132 "num_base_bdevs_discovered": 2, 00:19:01.132 "num_base_bdevs_operational": 4, 00:19:01.132 "base_bdevs_list": [ 00:19:01.132 { 00:19:01.132 "name": "BaseBdev1", 00:19:01.132 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:01.132 "is_configured": true, 00:19:01.132 "data_offset": 2048, 00:19:01.132 "data_size": 63488 00:19:01.132 }, 00:19:01.132 { 00:19:01.132 "name": null, 00:19:01.132 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:01.133 "is_configured": false, 00:19:01.133 "data_offset": 0, 00:19:01.133 "data_size": 63488 00:19:01.133 }, 00:19:01.133 { 00:19:01.133 "name": null, 00:19:01.133 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:01.133 "is_configured": false, 00:19:01.133 "data_offset": 0, 00:19:01.133 "data_size": 63488 00:19:01.133 }, 00:19:01.133 { 00:19:01.133 "name": "BaseBdev4", 00:19:01.133 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:01.133 "is_configured": true, 00:19:01.133 "data_offset": 2048, 00:19:01.133 "data_size": 63488 00:19:01.133 } 00:19:01.133 ] 00:19:01.133 }' 00:19:01.133 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.133 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.391 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.391 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:01.391 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.391 
09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.651 [2024-11-06 09:12:00.474059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.651 "name": "Existed_Raid", 00:19:01.651 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:01.651 "strip_size_kb": 0, 00:19:01.651 "state": "configuring", 00:19:01.651 "raid_level": "raid1", 00:19:01.651 "superblock": true, 00:19:01.651 "num_base_bdevs": 4, 00:19:01.651 "num_base_bdevs_discovered": 3, 00:19:01.651 "num_base_bdevs_operational": 4, 00:19:01.651 "base_bdevs_list": [ 00:19:01.651 { 00:19:01.651 "name": "BaseBdev1", 00:19:01.651 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:01.651 "is_configured": true, 00:19:01.651 "data_offset": 2048, 00:19:01.651 "data_size": 63488 00:19:01.651 }, 00:19:01.651 { 00:19:01.651 "name": null, 00:19:01.651 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:01.651 "is_configured": false, 00:19:01.651 "data_offset": 0, 00:19:01.651 "data_size": 63488 00:19:01.651 }, 00:19:01.651 { 00:19:01.651 "name": "BaseBdev3", 00:19:01.651 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:01.651 "is_configured": true, 00:19:01.651 "data_offset": 2048, 00:19:01.651 "data_size": 63488 00:19:01.651 }, 00:19:01.651 { 00:19:01.651 "name": "BaseBdev4", 00:19:01.651 "uuid": 
"80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:01.651 "is_configured": true, 00:19:01.651 "data_offset": 2048, 00:19:01.651 "data_size": 63488 00:19:01.651 } 00:19:01.651 ] 00:19:01.651 }' 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.651 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.910 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.910 [2024-11-06 09:12:00.890081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.168 09:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.168 "name": "Existed_Raid", 00:19:02.168 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:02.168 "strip_size_kb": 0, 00:19:02.168 "state": "configuring", 00:19:02.168 "raid_level": "raid1", 00:19:02.168 "superblock": true, 00:19:02.168 "num_base_bdevs": 4, 00:19:02.168 "num_base_bdevs_discovered": 2, 00:19:02.168 "num_base_bdevs_operational": 4, 00:19:02.168 "base_bdevs_list": [ 00:19:02.168 { 00:19:02.168 "name": null, 00:19:02.168 
"uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:02.168 "is_configured": false, 00:19:02.168 "data_offset": 0, 00:19:02.168 "data_size": 63488 00:19:02.168 }, 00:19:02.168 { 00:19:02.168 "name": null, 00:19:02.168 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:02.168 "is_configured": false, 00:19:02.168 "data_offset": 0, 00:19:02.168 "data_size": 63488 00:19:02.168 }, 00:19:02.168 { 00:19:02.168 "name": "BaseBdev3", 00:19:02.168 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:02.168 "is_configured": true, 00:19:02.168 "data_offset": 2048, 00:19:02.168 "data_size": 63488 00:19:02.168 }, 00:19:02.168 { 00:19:02.168 "name": "BaseBdev4", 00:19:02.168 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:02.168 "is_configured": true, 00:19:02.168 "data_offset": 2048, 00:19:02.168 "data_size": 63488 00:19:02.168 } 00:19:02.168 ] 00:19:02.168 }' 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.168 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.426 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:02.426 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.426 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.426 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.684 [2024-11-06 09:12:01.498088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.684 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.684 "name": "Existed_Raid", 00:19:02.684 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:02.684 "strip_size_kb": 0, 00:19:02.684 "state": "configuring", 00:19:02.685 "raid_level": "raid1", 00:19:02.685 "superblock": true, 00:19:02.685 "num_base_bdevs": 4, 00:19:02.685 "num_base_bdevs_discovered": 3, 00:19:02.685 "num_base_bdevs_operational": 4, 00:19:02.685 "base_bdevs_list": [ 00:19:02.685 { 00:19:02.685 "name": null, 00:19:02.685 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:02.685 "is_configured": false, 00:19:02.685 "data_offset": 0, 00:19:02.685 "data_size": 63488 00:19:02.685 }, 00:19:02.685 { 00:19:02.685 "name": "BaseBdev2", 00:19:02.685 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:02.685 "is_configured": true, 00:19:02.685 "data_offset": 2048, 00:19:02.685 "data_size": 63488 00:19:02.685 }, 00:19:02.685 { 00:19:02.685 "name": "BaseBdev3", 00:19:02.685 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:02.685 "is_configured": true, 00:19:02.685 "data_offset": 2048, 00:19:02.685 "data_size": 63488 00:19:02.685 }, 00:19:02.685 { 00:19:02.685 "name": "BaseBdev4", 00:19:02.685 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:02.685 "is_configured": true, 00:19:02.685 "data_offset": 2048, 00:19:02.685 "data_size": 63488 00:19:02.685 } 00:19:02.685 ] 00:19:02.685 }' 00:19:02.685 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.685 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.943 09:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.943 09:12:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.943 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.943 09:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 70319057-4717-46b6-8ff6-9e753909b6fa 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 [2024-11-06 09:12:02.107114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:03.200 [2024-11-06 09:12:02.107396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:03.200 [2024-11-06 09:12:02.107418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:03.200 [2024-11-06 09:12:02.107713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:19:03.200 NewBaseBdev 00:19:03.200 [2024-11-06 09:12:02.107879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:03.200 [2024-11-06 09:12:02.107896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:03.200 [2024-11-06 09:12:02.108040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 09:12:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 [ 00:19:03.200 { 00:19:03.200 "name": "NewBaseBdev", 00:19:03.200 "aliases": [ 00:19:03.200 "70319057-4717-46b6-8ff6-9e753909b6fa" 00:19:03.200 ], 00:19:03.200 "product_name": "Malloc disk", 00:19:03.200 "block_size": 512, 00:19:03.200 "num_blocks": 65536, 00:19:03.200 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:03.200 "assigned_rate_limits": { 00:19:03.200 "rw_ios_per_sec": 0, 00:19:03.200 "rw_mbytes_per_sec": 0, 00:19:03.200 "r_mbytes_per_sec": 0, 00:19:03.200 "w_mbytes_per_sec": 0 00:19:03.200 }, 00:19:03.200 "claimed": true, 00:19:03.200 "claim_type": "exclusive_write", 00:19:03.200 "zoned": false, 00:19:03.200 "supported_io_types": { 00:19:03.200 "read": true, 00:19:03.200 "write": true, 00:19:03.200 "unmap": true, 00:19:03.200 "flush": true, 00:19:03.200 "reset": true, 00:19:03.200 "nvme_admin": false, 00:19:03.200 "nvme_io": false, 00:19:03.200 "nvme_io_md": false, 00:19:03.200 "write_zeroes": true, 00:19:03.200 "zcopy": true, 00:19:03.200 "get_zone_info": false, 00:19:03.200 "zone_management": false, 00:19:03.200 "zone_append": false, 00:19:03.200 "compare": false, 00:19:03.200 "compare_and_write": false, 00:19:03.200 "abort": true, 00:19:03.200 "seek_hole": false, 00:19:03.200 "seek_data": false, 00:19:03.200 "copy": true, 00:19:03.200 "nvme_iov_md": false 00:19:03.200 }, 00:19:03.200 "memory_domains": [ 00:19:03.200 { 00:19:03.200 "dma_device_id": "system", 00:19:03.200 "dma_device_type": 1 00:19:03.200 }, 00:19:03.200 { 00:19:03.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.200 "dma_device_type": 2 00:19:03.200 } 00:19:03.200 ], 00:19:03.200 "driver_specific": {} 00:19:03.200 } 00:19:03.200 ] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:03.200 09:12:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.200 "name": "Existed_Raid", 00:19:03.200 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:03.200 "strip_size_kb": 0, 00:19:03.200 
"state": "online", 00:19:03.200 "raid_level": "raid1", 00:19:03.200 "superblock": true, 00:19:03.200 "num_base_bdevs": 4, 00:19:03.200 "num_base_bdevs_discovered": 4, 00:19:03.200 "num_base_bdevs_operational": 4, 00:19:03.200 "base_bdevs_list": [ 00:19:03.200 { 00:19:03.200 "name": "NewBaseBdev", 00:19:03.200 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 }, 00:19:03.200 { 00:19:03.200 "name": "BaseBdev2", 00:19:03.200 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 }, 00:19:03.200 { 00:19:03.200 "name": "BaseBdev3", 00:19:03.200 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 }, 00:19:03.200 { 00:19:03.200 "name": "BaseBdev4", 00:19:03.200 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 } 00:19:03.200 ] 00:19:03.200 }' 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.200 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.764 
09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.764 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.764 [2024-11-06 09:12:02.594861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.765 "name": "Existed_Raid", 00:19:03.765 "aliases": [ 00:19:03.765 "a9cdc759-2f8e-4ef5-8af8-e811b19117cf" 00:19:03.765 ], 00:19:03.765 "product_name": "Raid Volume", 00:19:03.765 "block_size": 512, 00:19:03.765 "num_blocks": 63488, 00:19:03.765 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:03.765 "assigned_rate_limits": { 00:19:03.765 "rw_ios_per_sec": 0, 00:19:03.765 "rw_mbytes_per_sec": 0, 00:19:03.765 "r_mbytes_per_sec": 0, 00:19:03.765 "w_mbytes_per_sec": 0 00:19:03.765 }, 00:19:03.765 "claimed": false, 00:19:03.765 "zoned": false, 00:19:03.765 "supported_io_types": { 00:19:03.765 "read": true, 00:19:03.765 "write": true, 00:19:03.765 "unmap": false, 00:19:03.765 "flush": false, 00:19:03.765 "reset": true, 00:19:03.765 "nvme_admin": false, 00:19:03.765 "nvme_io": false, 00:19:03.765 "nvme_io_md": false, 00:19:03.765 "write_zeroes": true, 00:19:03.765 "zcopy": false, 00:19:03.765 "get_zone_info": false, 00:19:03.765 "zone_management": false, 00:19:03.765 "zone_append": false, 00:19:03.765 "compare": false, 00:19:03.765 "compare_and_write": false, 00:19:03.765 
"abort": false, 00:19:03.765 "seek_hole": false, 00:19:03.765 "seek_data": false, 00:19:03.765 "copy": false, 00:19:03.765 "nvme_iov_md": false 00:19:03.765 }, 00:19:03.765 "memory_domains": [ 00:19:03.765 { 00:19:03.765 "dma_device_id": "system", 00:19:03.765 "dma_device_type": 1 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.765 "dma_device_type": 2 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "system", 00:19:03.765 "dma_device_type": 1 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.765 "dma_device_type": 2 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "system", 00:19:03.765 "dma_device_type": 1 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.765 "dma_device_type": 2 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "system", 00:19:03.765 "dma_device_type": 1 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.765 "dma_device_type": 2 00:19:03.765 } 00:19:03.765 ], 00:19:03.765 "driver_specific": { 00:19:03.765 "raid": { 00:19:03.765 "uuid": "a9cdc759-2f8e-4ef5-8af8-e811b19117cf", 00:19:03.765 "strip_size_kb": 0, 00:19:03.765 "state": "online", 00:19:03.765 "raid_level": "raid1", 00:19:03.765 "superblock": true, 00:19:03.765 "num_base_bdevs": 4, 00:19:03.765 "num_base_bdevs_discovered": 4, 00:19:03.765 "num_base_bdevs_operational": 4, 00:19:03.765 "base_bdevs_list": [ 00:19:03.765 { 00:19:03.765 "name": "NewBaseBdev", 00:19:03.765 "uuid": "70319057-4717-46b6-8ff6-9e753909b6fa", 00:19:03.765 "is_configured": true, 00:19:03.765 "data_offset": 2048, 00:19:03.765 "data_size": 63488 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "name": "BaseBdev2", 00:19:03.765 "uuid": "3d377a93-b88e-471a-b4b3-2fb15e675565", 00:19:03.765 "is_configured": true, 00:19:03.765 "data_offset": 2048, 00:19:03.765 "data_size": 63488 00:19:03.765 }, 00:19:03.765 { 
00:19:03.765 "name": "BaseBdev3", 00:19:03.765 "uuid": "ba5cb889-a1e0-4844-a4ee-769227242069", 00:19:03.765 "is_configured": true, 00:19:03.765 "data_offset": 2048, 00:19:03.765 "data_size": 63488 00:19:03.765 }, 00:19:03.765 { 00:19:03.765 "name": "BaseBdev4", 00:19:03.765 "uuid": "80e0b088-b994-4b24-9f4a-ac9b1236936f", 00:19:03.765 "is_configured": true, 00:19:03.765 "data_offset": 2048, 00:19:03.765 "data_size": 63488 00:19:03.765 } 00:19:03.765 ] 00:19:03.765 } 00:19:03.765 } 00:19:03.765 }' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:03.765 BaseBdev2 00:19:03.765 BaseBdev3 00:19:03.765 BaseBdev4' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.765 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 [2024-11-06 09:12:02.910257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.023 [2024-11-06 09:12:02.910310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.023 [2024-11-06 09:12:02.910406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.023 [2024-11-06 09:12:02.910717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.023 [2024-11-06 09:12:02.910736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73590 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73590 ']' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73590 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73590 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:04.023 killing process with pid 73590 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73590' 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73590 00:19:04.023 [2024-11-06 09:12:02.951564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.023 09:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73590 00:19:04.617 [2024-11-06 09:12:03.385785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.554 ************************************ 00:19:05.554 END TEST raid_state_function_test_sb 00:19:05.554 ************************************ 00:19:05.554 09:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:05.554 00:19:05.554 real 0m11.462s 
00:19:05.554 user 0m18.177s 00:19:05.554 sys 0m2.247s 00:19:05.554 09:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:05.554 09:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.813 09:12:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:05.813 09:12:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:05.813 09:12:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:05.813 09:12:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.813 ************************************ 00:19:05.813 START TEST raid_superblock_test 00:19:05.813 ************************************ 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:05.813 09:12:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74260 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74260 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74260 ']' 00:19:05.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:05.813 09:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.813 [2024-11-06 09:12:04.716656] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:19:05.813 [2024-11-06 09:12:04.716781] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74260 ] 00:19:06.071 [2024-11-06 09:12:04.882951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.071 [2024-11-06 09:12:05.002663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.329 [2024-11-06 09:12:05.214417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.329 [2024-11-06 09:12:05.214485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:06.588 
09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.588 malloc1 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.588 [2024-11-06 09:12:05.597743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.588 [2024-11-06 09:12:05.597818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.588 [2024-11-06 09:12:05.597849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:06.588 [2024-11-06 09:12:05.597866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.588 [2024-11-06 09:12:05.600586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.588 [2024-11-06 09:12:05.600629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.588 pt1 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.588 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 malloc2 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 [2024-11-06 09:12:05.656695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.847 [2024-11-06 09:12:05.657329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.847 [2024-11-06 09:12:05.657540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:06.847 [2024-11-06 09:12:05.657695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.847 [2024-11-06 09:12:05.660951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.847 [2024-11-06 09:12:05.661105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.847 
pt2 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 malloc3 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 [2024-11-06 09:12:05.731804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:06.847 [2024-11-06 09:12:05.732001] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.847 [2024-11-06 09:12:05.732064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:06.847 [2024-11-06 09:12:05.732086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.847 [2024-11-06 09:12:05.734679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.847 [2024-11-06 09:12:05.734721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:06.847 pt3 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:06.847 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.848 malloc4 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.848 [2024-11-06 09:12:05.789747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:06.848 [2024-11-06 09:12:05.789930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.848 [2024-11-06 09:12:05.790109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:06.848 [2024-11-06 09:12:05.790219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.848 [2024-11-06 09:12:05.792749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.848 [2024-11-06 09:12:05.792890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:06.848 pt4 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.848 [2024-11-06 09:12:05.801851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.848 [2024-11-06 09:12:05.804146] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.848 [2024-11-06 09:12:05.804360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:06.848 [2024-11-06 09:12:05.804430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:06.848 [2024-11-06 09:12:05.804649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:06.848 [2024-11-06 09:12:05.804671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:06.848 [2024-11-06 09:12:05.804966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:06.848 [2024-11-06 09:12:05.805148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:06.848 [2024-11-06 09:12:05.805167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:06.848 [2024-11-06 09:12:05.805348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.848 
09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.848 "name": "raid_bdev1", 00:19:06.848 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:06.848 "strip_size_kb": 0, 00:19:06.848 "state": "online", 00:19:06.848 "raid_level": "raid1", 00:19:06.848 "superblock": true, 00:19:06.848 "num_base_bdevs": 4, 00:19:06.848 "num_base_bdevs_discovered": 4, 00:19:06.848 "num_base_bdevs_operational": 4, 00:19:06.848 "base_bdevs_list": [ 00:19:06.848 { 00:19:06.848 "name": "pt1", 00:19:06.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.848 "is_configured": true, 00:19:06.848 "data_offset": 2048, 00:19:06.848 "data_size": 63488 00:19:06.848 }, 00:19:06.848 { 00:19:06.848 "name": "pt2", 00:19:06.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.848 "is_configured": true, 00:19:06.848 "data_offset": 2048, 00:19:06.848 "data_size": 63488 00:19:06.848 }, 00:19:06.848 { 00:19:06.848 "name": "pt3", 00:19:06.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.848 "is_configured": true, 00:19:06.848 "data_offset": 2048, 00:19:06.848 "data_size": 63488 
00:19:06.848 }, 00:19:06.848 { 00:19:06.848 "name": "pt4", 00:19:06.848 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.848 "is_configured": true, 00:19:06.848 "data_offset": 2048, 00:19:06.848 "data_size": 63488 00:19:06.848 } 00:19:06.848 ] 00:19:06.848 }' 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.848 09:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.415 [2024-11-06 09:12:06.261540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.415 "name": "raid_bdev1", 00:19:07.415 "aliases": [ 00:19:07.415 "12f987cf-891a-45a0-840d-21c159cb3b3a" 00:19:07.415 ], 
00:19:07.415 "product_name": "Raid Volume", 00:19:07.415 "block_size": 512, 00:19:07.415 "num_blocks": 63488, 00:19:07.415 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:07.415 "assigned_rate_limits": { 00:19:07.415 "rw_ios_per_sec": 0, 00:19:07.415 "rw_mbytes_per_sec": 0, 00:19:07.415 "r_mbytes_per_sec": 0, 00:19:07.415 "w_mbytes_per_sec": 0 00:19:07.415 }, 00:19:07.415 "claimed": false, 00:19:07.415 "zoned": false, 00:19:07.415 "supported_io_types": { 00:19:07.415 "read": true, 00:19:07.415 "write": true, 00:19:07.415 "unmap": false, 00:19:07.415 "flush": false, 00:19:07.415 "reset": true, 00:19:07.415 "nvme_admin": false, 00:19:07.415 "nvme_io": false, 00:19:07.415 "nvme_io_md": false, 00:19:07.415 "write_zeroes": true, 00:19:07.415 "zcopy": false, 00:19:07.415 "get_zone_info": false, 00:19:07.415 "zone_management": false, 00:19:07.415 "zone_append": false, 00:19:07.415 "compare": false, 00:19:07.415 "compare_and_write": false, 00:19:07.415 "abort": false, 00:19:07.415 "seek_hole": false, 00:19:07.415 "seek_data": false, 00:19:07.415 "copy": false, 00:19:07.415 "nvme_iov_md": false 00:19:07.415 }, 00:19:07.415 "memory_domains": [ 00:19:07.415 { 00:19:07.415 "dma_device_id": "system", 00:19:07.415 "dma_device_type": 1 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.415 "dma_device_type": 2 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "system", 00:19:07.415 "dma_device_type": 1 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.415 "dma_device_type": 2 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "system", 00:19:07.415 "dma_device_type": 1 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.415 "dma_device_type": 2 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": "system", 00:19:07.415 "dma_device_type": 1 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:07.415 "dma_device_type": 2 00:19:07.415 } 00:19:07.415 ], 00:19:07.415 "driver_specific": { 00:19:07.415 "raid": { 00:19:07.415 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:07.415 "strip_size_kb": 0, 00:19:07.415 "state": "online", 00:19:07.415 "raid_level": "raid1", 00:19:07.415 "superblock": true, 00:19:07.415 "num_base_bdevs": 4, 00:19:07.415 "num_base_bdevs_discovered": 4, 00:19:07.415 "num_base_bdevs_operational": 4, 00:19:07.415 "base_bdevs_list": [ 00:19:07.415 { 00:19:07.415 "name": "pt1", 00:19:07.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.415 "is_configured": true, 00:19:07.415 "data_offset": 2048, 00:19:07.415 "data_size": 63488 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "name": "pt2", 00:19:07.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.415 "is_configured": true, 00:19:07.415 "data_offset": 2048, 00:19:07.415 "data_size": 63488 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "name": "pt3", 00:19:07.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.415 "is_configured": true, 00:19:07.415 "data_offset": 2048, 00:19:07.415 "data_size": 63488 00:19:07.415 }, 00:19:07.415 { 00:19:07.415 "name": "pt4", 00:19:07.415 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.415 "is_configured": true, 00:19:07.415 "data_offset": 2048, 00:19:07.415 "data_size": 63488 00:19:07.415 } 00:19:07.415 ] 00:19:07.415 } 00:19:07.415 } 00:19:07.415 }' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:07.415 pt2 00:19:07.415 pt3 00:19:07.415 pt4' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:07.415 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.416 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.674 09:12:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.674 [2024-11-06 09:12:06.600990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=12f987cf-891a-45a0-840d-21c159cb3b3a 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 12f987cf-891a-45a0-840d-21c159cb3b3a ']' 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.674 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.674 [2024-11-06 09:12:06.644652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.675 [2024-11-06 09:12:06.644680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.675 [2024-11-06 09:12:06.644765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.675 [2024-11-06 09:12:06.644850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.675 [2024-11-06 09:12:06.644868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.675 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 [2024-11-06 09:12:06.808435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:07.934 [2024-11-06 09:12:06.810727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:07.934 [2024-11-06 09:12:06.810923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:07.934 [2024-11-06 09:12:06.810973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:07.934 [2024-11-06 09:12:06.811029] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:07.934 [2024-11-06 09:12:06.811094] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:07.934 [2024-11-06 09:12:06.811119] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:07.934 [2024-11-06 09:12:06.811143] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:07.934 [2024-11-06 09:12:06.811160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.934 [2024-11-06 09:12:06.811175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:19:07.934 request: 00:19:07.934 { 00:19:07.934 "name": "raid_bdev1", 00:19:07.934 "raid_level": "raid1", 00:19:07.934 "base_bdevs": [ 00:19:07.934 "malloc1", 00:19:07.934 "malloc2", 00:19:07.934 "malloc3", 00:19:07.934 "malloc4" 00:19:07.934 ], 00:19:07.934 "superblock": false, 00:19:07.934 "method": "bdev_raid_create", 00:19:07.934 "req_id": 1 00:19:07.934 } 00:19:07.934 Got JSON-RPC error response 00:19:07.934 response: 00:19:07.934 { 00:19:07.934 "code": -17, 00:19:07.934 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:07.934 } 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:07.934 09:12:06 
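The `-17` / "File exists" error above is the expected outcome: the test wraps `rpc_cmd bdev_raid_create` in autotest's `NOT` helper, which succeeds only when the wrapped command fails (the malloc bdevs still carry superblocks from the earlier raid, so re-creating over them must be rejected). A simplified sketch of that inversion pattern, assuming nothing beyond POSIX shell (the real helper in `autotest_common.sh` also records the exit status in `es` and special-cases large values, as the `(( es > 128 ))` trace shows):

```shell
# Simplified exit-status inverter in the spirit of autotest_common.sh's NOT:
# run the command and swap success and failure.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    else
        return 0        # expected failure observed
    fi
}

NOT false && echo "expected failure: treated as pass"
NOT true  || echo "unexpected success: treated as fail"
```

Wrapping negative-path RPCs this way lets the test assert on failures without tripping `set -e`.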
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 [2024-11-06 09:12:06.876397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.934 [2024-11-06 09:12:06.876573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.934 [2024-11-06 09:12:06.876743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:07.934 [2024-11-06 09:12:06.876821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.934 [2024-11-06 09:12:06.879569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.934 [2024-11-06 09:12:06.879716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.934 [2024-11-06 09:12:06.879897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:07.934 pt1 00:19:07.934 [2024-11-06 09:12:06.880071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:07.934 09:12:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.934 "name": "raid_bdev1", 00:19:07.934 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:07.934 "strip_size_kb": 0, 00:19:07.934 "state": "configuring", 00:19:07.934 "raid_level": "raid1", 00:19:07.934 "superblock": true, 00:19:07.934 "num_base_bdevs": 4, 00:19:07.934 "num_base_bdevs_discovered": 1, 00:19:07.934 "num_base_bdevs_operational": 4, 00:19:07.934 "base_bdevs_list": [ 00:19:07.934 { 00:19:07.934 "name": "pt1", 00:19:07.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.934 "is_configured": true, 00:19:07.934 "data_offset": 2048, 00:19:07.935 "data_size": 63488 00:19:07.935 }, 00:19:07.935 { 00:19:07.935 "name": null, 00:19:07.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.935 "is_configured": false, 00:19:07.935 "data_offset": 2048, 00:19:07.935 "data_size": 63488 00:19:07.935 }, 00:19:07.935 { 00:19:07.935 "name": null, 00:19:07.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.935 
"is_configured": false, 00:19:07.935 "data_offset": 2048, 00:19:07.935 "data_size": 63488 00:19:07.935 }, 00:19:07.935 { 00:19:07.935 "name": null, 00:19:07.935 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.935 "is_configured": false, 00:19:07.935 "data_offset": 2048, 00:19:07.935 "data_size": 63488 00:19:07.935 } 00:19:07.935 ] 00:19:07.935 }' 00:19:07.935 09:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.935 09:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.503 [2024-11-06 09:12:07.272531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:08.503 [2024-11-06 09:12:07.272753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.503 [2024-11-06 09:12:07.272808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:08.503 [2024-11-06 09:12:07.272826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.503 [2024-11-06 09:12:07.273611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.503 [2024-11-06 09:12:07.273644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:08.503 [2024-11-06 09:12:07.273799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:08.503 [2024-11-06 09:12:07.273845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:19:08.503 pt2 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.503 [2024-11-06 09:12:07.284449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.503 09:12:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.503 "name": "raid_bdev1", 00:19:08.503 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:08.503 "strip_size_kb": 0, 00:19:08.503 "state": "configuring", 00:19:08.503 "raid_level": "raid1", 00:19:08.503 "superblock": true, 00:19:08.503 "num_base_bdevs": 4, 00:19:08.503 "num_base_bdevs_discovered": 1, 00:19:08.503 "num_base_bdevs_operational": 4, 00:19:08.503 "base_bdevs_list": [ 00:19:08.503 { 00:19:08.503 "name": "pt1", 00:19:08.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:08.503 "is_configured": true, 00:19:08.503 "data_offset": 2048, 00:19:08.503 "data_size": 63488 00:19:08.503 }, 00:19:08.503 { 00:19:08.503 "name": null, 00:19:08.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.503 "is_configured": false, 00:19:08.503 "data_offset": 0, 00:19:08.503 "data_size": 63488 00:19:08.503 }, 00:19:08.503 { 00:19:08.503 "name": null, 00:19:08.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.503 "is_configured": false, 00:19:08.503 "data_offset": 2048, 00:19:08.503 "data_size": 63488 00:19:08.503 }, 00:19:08.503 { 00:19:08.503 "name": null, 00:19:08.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.503 "is_configured": false, 00:19:08.503 "data_offset": 2048, 00:19:08.503 "data_size": 63488 00:19:08.503 } 00:19:08.503 ] 00:19:08.503 }' 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.503 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.762 [2024-11-06 09:12:07.727831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:08.762 [2024-11-06 09:12:07.727901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.762 [2024-11-06 09:12:07.727932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:08.762 [2024-11-06 09:12:07.727945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.762 [2024-11-06 09:12:07.728427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.762 [2024-11-06 09:12:07.728448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:08.762 [2024-11-06 09:12:07.728540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:08.762 [2024-11-06 09:12:07.728562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.762 pt2 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:08.762 09:12:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.762 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.763 [2024-11-06 09:12:07.739804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:08.763 [2024-11-06 09:12:07.739988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.763 [2024-11-06 09:12:07.740046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:08.763 [2024-11-06 09:12:07.740135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.763 [2024-11-06 09:12:07.740643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.763 [2024-11-06 09:12:07.740778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:08.763 [2024-11-06 09:12:07.740939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:08.763 [2024-11-06 09:12:07.741087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:08.763 pt3 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.763 [2024-11-06 09:12:07.751762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:08.763 [2024-11-06 
09:12:07.751918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.763 [2024-11-06 09:12:07.751975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:08.763 [2024-11-06 09:12:07.752088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.763 [2024-11-06 09:12:07.752632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.763 [2024-11-06 09:12:07.752761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:08.763 [2024-11-06 09:12:07.752921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:08.763 [2024-11-06 09:12:07.753036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:08.763 [2024-11-06 09:12:07.753319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:08.763 [2024-11-06 09:12:07.753448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.763 [2024-11-06 09:12:07.753760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.763 [2024-11-06 09:12:07.753958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:08.763 [2024-11-06 09:12:07.753977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:08.763 [2024-11-06 09:12:07.754158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.763 pt4 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- 
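The loop above recreates pt2 through pt4, and because each base bdev was created with a superblock, SPDK's examine path reassembles `raid_bdev1` on its own once the last passthru appears (the `raid superblock found on bdev pt4` / `raid bdev is created` traces). A sketch of the equivalent `rpc.py` command sequence; the client path is an assumption (SPDK ships it as `scripts/rpc.py`), and the commands are only printed here, not executed against a target:

```shell
# Command strings only: the RPC sequence the recreation loop drives.
# "scripts/rpc.py" is the assumed path to SPDK's JSON-RPC client.
rpc=scripts/rpc.py

for n in 2 3 4; do
    echo "$rpc bdev_passthru_create -b malloc$n -p pt$n -u 00000000-0000-0000-0000-00000000000$n"
done

# No explicit bdev_raid_create is needed: superblocks on pt1..pt4 let the
# examine callback bring raid_bdev1 back online; this just inspects it.
echo "$rpc bdev_raid_get_bdevs all"
```

This mirrors why the earlier `bdev_passthru_delete pt2` left the raid in `configuring`: assembly waits until every superblock-bearing base bdev is present.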
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.763 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.022 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.022 "name": "raid_bdev1", 00:19:09.022 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:09.022 "strip_size_kb": 0, 00:19:09.022 "state": "online", 00:19:09.022 "raid_level": "raid1", 00:19:09.022 "superblock": true, 00:19:09.022 "num_base_bdevs": 4, 00:19:09.022 
"num_base_bdevs_discovered": 4, 00:19:09.022 "num_base_bdevs_operational": 4, 00:19:09.022 "base_bdevs_list": [ 00:19:09.022 { 00:19:09.022 "name": "pt1", 00:19:09.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.022 "is_configured": true, 00:19:09.022 "data_offset": 2048, 00:19:09.022 "data_size": 63488 00:19:09.022 }, 00:19:09.022 { 00:19:09.022 "name": "pt2", 00:19:09.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.022 "is_configured": true, 00:19:09.022 "data_offset": 2048, 00:19:09.022 "data_size": 63488 00:19:09.022 }, 00:19:09.022 { 00:19:09.022 "name": "pt3", 00:19:09.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.022 "is_configured": true, 00:19:09.022 "data_offset": 2048, 00:19:09.022 "data_size": 63488 00:19:09.022 }, 00:19:09.022 { 00:19:09.022 "name": "pt4", 00:19:09.022 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:09.022 "is_configured": true, 00:19:09.022 "data_offset": 2048, 00:19:09.022 "data_size": 63488 00:19:09.022 } 00:19:09.022 ] 00:19:09.022 }' 00:19:09.022 09:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.022 09:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.280 [2024-11-06 09:12:08.215649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.280 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.280 "name": "raid_bdev1", 00:19:09.280 "aliases": [ 00:19:09.280 "12f987cf-891a-45a0-840d-21c159cb3b3a" 00:19:09.280 ], 00:19:09.280 "product_name": "Raid Volume", 00:19:09.280 "block_size": 512, 00:19:09.280 "num_blocks": 63488, 00:19:09.280 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:09.280 "assigned_rate_limits": { 00:19:09.280 "rw_ios_per_sec": 0, 00:19:09.280 "rw_mbytes_per_sec": 0, 00:19:09.280 "r_mbytes_per_sec": 0, 00:19:09.280 "w_mbytes_per_sec": 0 00:19:09.280 }, 00:19:09.280 "claimed": false, 00:19:09.280 "zoned": false, 00:19:09.280 "supported_io_types": { 00:19:09.280 "read": true, 00:19:09.280 "write": true, 00:19:09.280 "unmap": false, 00:19:09.280 "flush": false, 00:19:09.281 "reset": true, 00:19:09.281 "nvme_admin": false, 00:19:09.281 "nvme_io": false, 00:19:09.281 "nvme_io_md": false, 00:19:09.281 "write_zeroes": true, 00:19:09.281 "zcopy": false, 00:19:09.281 "get_zone_info": false, 00:19:09.281 "zone_management": false, 00:19:09.281 "zone_append": false, 00:19:09.281 "compare": false, 00:19:09.281 "compare_and_write": false, 00:19:09.281 "abort": false, 00:19:09.281 "seek_hole": false, 00:19:09.281 "seek_data": false, 00:19:09.281 "copy": false, 00:19:09.281 "nvme_iov_md": false 00:19:09.281 }, 00:19:09.281 "memory_domains": [ 00:19:09.281 { 00:19:09.281 "dma_device_id": "system", 00:19:09.281 
"dma_device_type": 1 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.281 "dma_device_type": 2 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "system", 00:19:09.281 "dma_device_type": 1 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.281 "dma_device_type": 2 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "system", 00:19:09.281 "dma_device_type": 1 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.281 "dma_device_type": 2 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "system", 00:19:09.281 "dma_device_type": 1 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.281 "dma_device_type": 2 00:19:09.281 } 00:19:09.281 ], 00:19:09.281 "driver_specific": { 00:19:09.281 "raid": { 00:19:09.281 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:09.281 "strip_size_kb": 0, 00:19:09.281 "state": "online", 00:19:09.281 "raid_level": "raid1", 00:19:09.281 "superblock": true, 00:19:09.281 "num_base_bdevs": 4, 00:19:09.281 "num_base_bdevs_discovered": 4, 00:19:09.281 "num_base_bdevs_operational": 4, 00:19:09.281 "base_bdevs_list": [ 00:19:09.281 { 00:19:09.281 "name": "pt1", 00:19:09.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.281 "is_configured": true, 00:19:09.281 "data_offset": 2048, 00:19:09.281 "data_size": 63488 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "name": "pt2", 00:19:09.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.281 "is_configured": true, 00:19:09.281 "data_offset": 2048, 00:19:09.281 "data_size": 63488 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "name": "pt3", 00:19:09.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.281 "is_configured": true, 00:19:09.281 "data_offset": 2048, 00:19:09.281 "data_size": 63488 00:19:09.281 }, 00:19:09.281 { 00:19:09.281 "name": "pt4", 00:19:09.281 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:19:09.281 "is_configured": true, 00:19:09.281 "data_offset": 2048, 00:19:09.281 "data_size": 63488 00:19:09.281 } 00:19:09.281 ] 00:19:09.281 } 00:19:09.281 } 00:19:09.281 }' 00:19:09.281 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.281 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:09.281 pt2 00:19:09.281 pt3 00:19:09.281 pt4' 00:19:09.281 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.540 09:12:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.540 [2024-11-06 09:12:08.543009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.540 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 12f987cf-891a-45a0-840d-21c159cb3b3a '!=' 12f987cf-891a-45a0-840d-21c159cb3b3a ']' 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.798 [2024-11-06 09:12:08.586675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:09.798 09:12:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.798 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.798 "name": "raid_bdev1", 00:19:09.798 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:09.798 "strip_size_kb": 0, 00:19:09.798 "state": "online", 
00:19:09.798 "raid_level": "raid1", 00:19:09.798 "superblock": true, 00:19:09.798 "num_base_bdevs": 4, 00:19:09.798 "num_base_bdevs_discovered": 3, 00:19:09.799 "num_base_bdevs_operational": 3, 00:19:09.799 "base_bdevs_list": [ 00:19:09.799 { 00:19:09.799 "name": null, 00:19:09.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.799 "is_configured": false, 00:19:09.799 "data_offset": 0, 00:19:09.799 "data_size": 63488 00:19:09.799 }, 00:19:09.799 { 00:19:09.799 "name": "pt2", 00:19:09.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.799 "is_configured": true, 00:19:09.799 "data_offset": 2048, 00:19:09.799 "data_size": 63488 00:19:09.799 }, 00:19:09.799 { 00:19:09.799 "name": "pt3", 00:19:09.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.799 "is_configured": true, 00:19:09.799 "data_offset": 2048, 00:19:09.799 "data_size": 63488 00:19:09.799 }, 00:19:09.799 { 00:19:09.799 "name": "pt4", 00:19:09.799 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:09.799 "is_configured": true, 00:19:09.799 "data_offset": 2048, 00:19:09.799 "data_size": 63488 00:19:09.799 } 00:19:09.799 ] 00:19:09.799 }' 00:19:09.799 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.799 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 [2024-11-06 09:12:08.990096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.057 [2024-11-06 09:12:08.990132] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.057 [2024-11-06 09:12:08.990228] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:19:10.057 [2024-11-06 09:12:08.990332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.057 [2024-11-06 09:12:08.990346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:10.057 
09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 [2024-11-06 09:12:09.070019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:10.057 [2024-11-06 09:12:09.070191] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.057 [2024-11-06 09:12:09.070249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:10.057 [2024-11-06 09:12:09.070357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.057 [2024-11-06 09:12:09.072856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.057 [2024-11-06 09:12:09.072996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:10.057 [2024-11-06 09:12:09.073163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:10.057 [2024-11-06 09:12:09.073247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.057 pt2 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.316 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.316 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.316 "name": "raid_bdev1", 00:19:10.316 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:10.316 "strip_size_kb": 0, 00:19:10.316 "state": "configuring", 00:19:10.316 "raid_level": "raid1", 00:19:10.316 "superblock": true, 00:19:10.316 "num_base_bdevs": 4, 00:19:10.316 "num_base_bdevs_discovered": 1, 00:19:10.316 "num_base_bdevs_operational": 3, 00:19:10.316 "base_bdevs_list": [ 00:19:10.316 { 00:19:10.316 "name": null, 00:19:10.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.316 "is_configured": false, 00:19:10.316 "data_offset": 2048, 00:19:10.316 "data_size": 63488 00:19:10.316 }, 00:19:10.316 { 00:19:10.316 "name": "pt2", 00:19:10.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.316 "is_configured": true, 00:19:10.316 "data_offset": 2048, 00:19:10.316 "data_size": 63488 00:19:10.316 }, 00:19:10.316 { 00:19:10.316 "name": null, 00:19:10.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:10.316 "is_configured": false, 00:19:10.316 "data_offset": 2048, 00:19:10.316 "data_size": 63488 00:19:10.316 }, 00:19:10.316 { 00:19:10.316 "name": null, 00:19:10.316 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:10.316 "is_configured": false, 00:19:10.316 "data_offset": 2048, 00:19:10.316 "data_size": 63488 00:19:10.316 } 00:19:10.316 ] 00:19:10.316 }' 
00:19:10.316 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.316 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.575 [2024-11-06 09:12:09.502066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:10.575 [2024-11-06 09:12:09.502134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.575 [2024-11-06 09:12:09.502161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:10.575 [2024-11-06 09:12:09.502174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.575 [2024-11-06 09:12:09.502683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.575 [2024-11-06 09:12:09.502715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:10.575 [2024-11-06 09:12:09.502846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:10.575 [2024-11-06 09:12:09.502877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:10.575 pt3 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.575 "name": "raid_bdev1", 00:19:10.575 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:10.575 "strip_size_kb": 0, 00:19:10.575 "state": "configuring", 00:19:10.575 "raid_level": "raid1", 00:19:10.575 "superblock": true, 00:19:10.575 "num_base_bdevs": 4, 00:19:10.575 "num_base_bdevs_discovered": 2, 00:19:10.575 "num_base_bdevs_operational": 3, 00:19:10.575 
"base_bdevs_list": [ 00:19:10.575 { 00:19:10.575 "name": null, 00:19:10.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.575 "is_configured": false, 00:19:10.575 "data_offset": 2048, 00:19:10.575 "data_size": 63488 00:19:10.575 }, 00:19:10.575 { 00:19:10.575 "name": "pt2", 00:19:10.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.575 "is_configured": true, 00:19:10.575 "data_offset": 2048, 00:19:10.575 "data_size": 63488 00:19:10.575 }, 00:19:10.575 { 00:19:10.575 "name": "pt3", 00:19:10.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:10.575 "is_configured": true, 00:19:10.575 "data_offset": 2048, 00:19:10.575 "data_size": 63488 00:19:10.575 }, 00:19:10.575 { 00:19:10.575 "name": null, 00:19:10.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:10.575 "is_configured": false, 00:19:10.575 "data_offset": 2048, 00:19:10.575 "data_size": 63488 00:19:10.575 } 00:19:10.575 ] 00:19:10.575 }' 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.575 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.141 [2024-11-06 09:12:09.918092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:11.141 [2024-11-06 09:12:09.918160] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.141 [2024-11-06 09:12:09.918187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:11.141 [2024-11-06 09:12:09.918200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.141 [2024-11-06 09:12:09.918701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.141 [2024-11-06 09:12:09.918722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:11.141 [2024-11-06 09:12:09.918807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:11.141 [2024-11-06 09:12:09.918838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:11.141 [2024-11-06 09:12:09.918983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:11.141 [2024-11-06 09:12:09.918994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:11.141 [2024-11-06 09:12:09.919257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:11.141 [2024-11-06 09:12:09.919418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:11.141 [2024-11-06 09:12:09.919433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:11.141 [2024-11-06 09:12:09.919562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.141 pt4 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.141 "name": "raid_bdev1", 00:19:11.141 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:11.141 "strip_size_kb": 0, 00:19:11.141 "state": "online", 00:19:11.141 "raid_level": "raid1", 00:19:11.141 "superblock": true, 00:19:11.141 "num_base_bdevs": 4, 00:19:11.141 "num_base_bdevs_discovered": 3, 00:19:11.141 "num_base_bdevs_operational": 3, 00:19:11.141 "base_bdevs_list": [ 00:19:11.141 { 00:19:11.141 "name": null, 00:19:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.141 "is_configured": false, 00:19:11.141 
"data_offset": 2048, 00:19:11.141 "data_size": 63488 00:19:11.141 }, 00:19:11.141 { 00:19:11.141 "name": "pt2", 00:19:11.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.141 "is_configured": true, 00:19:11.141 "data_offset": 2048, 00:19:11.141 "data_size": 63488 00:19:11.141 }, 00:19:11.141 { 00:19:11.141 "name": "pt3", 00:19:11.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.141 "is_configured": true, 00:19:11.141 "data_offset": 2048, 00:19:11.141 "data_size": 63488 00:19:11.141 }, 00:19:11.141 { 00:19:11.141 "name": "pt4", 00:19:11.141 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:11.141 "is_configured": true, 00:19:11.141 "data_offset": 2048, 00:19:11.141 "data_size": 63488 00:19:11.141 } 00:19:11.141 ] 00:19:11.141 }' 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.141 09:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.400 [2024-11-06 09:12:10.333991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.400 [2024-11-06 09:12:10.334022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.400 [2024-11-06 09:12:10.334106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.400 [2024-11-06 09:12:10.334199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.400 [2024-11-06 09:12:10.334217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:11.400 09:12:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.400 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.401 [2024-11-06 09:12:10.393882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:11.401 [2024-11-06 09:12:10.394067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:11.401 [2024-11-06 09:12:10.394095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:11.401 [2024-11-06 09:12:10.394110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.401 [2024-11-06 09:12:10.396573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.401 [2024-11-06 09:12:10.396615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:11.401 [2024-11-06 09:12:10.396703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:11.401 [2024-11-06 09:12:10.396754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.401 [2024-11-06 09:12:10.396897] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:11.401 [2024-11-06 09:12:10.396911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.401 [2024-11-06 09:12:10.396927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:11.401 [2024-11-06 09:12:10.397006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.401 [2024-11-06 09:12:10.397105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:11.401 pt1 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.401 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.659 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.659 "name": "raid_bdev1", 00:19:11.659 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:11.659 "strip_size_kb": 0, 00:19:11.659 "state": "configuring", 00:19:11.659 "raid_level": "raid1", 00:19:11.659 "superblock": true, 00:19:11.659 "num_base_bdevs": 4, 00:19:11.659 "num_base_bdevs_discovered": 2, 00:19:11.659 "num_base_bdevs_operational": 3, 00:19:11.659 "base_bdevs_list": [ 00:19:11.659 { 00:19:11.659 "name": null, 00:19:11.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.659 "is_configured": false, 00:19:11.659 "data_offset": 2048, 00:19:11.660 
"data_size": 63488 00:19:11.660 }, 00:19:11.660 { 00:19:11.660 "name": "pt2", 00:19:11.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.660 "is_configured": true, 00:19:11.660 "data_offset": 2048, 00:19:11.660 "data_size": 63488 00:19:11.660 }, 00:19:11.660 { 00:19:11.660 "name": "pt3", 00:19:11.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.660 "is_configured": true, 00:19:11.660 "data_offset": 2048, 00:19:11.660 "data_size": 63488 00:19:11.660 }, 00:19:11.660 { 00:19:11.660 "name": null, 00:19:11.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:11.660 "is_configured": false, 00:19:11.660 "data_offset": 2048, 00:19:11.660 "data_size": 63488 00:19:11.660 } 00:19:11.660 ] 00:19:11.660 }' 00:19:11.660 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.660 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.947 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.947 [2024-11-06 
09:12:10.865410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:11.947 [2024-11-06 09:12:10.865476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.947 [2024-11-06 09:12:10.865500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:11.948 [2024-11-06 09:12:10.865512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.948 [2024-11-06 09:12:10.865976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.948 [2024-11-06 09:12:10.865996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:11.948 [2024-11-06 09:12:10.866083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:11.948 [2024-11-06 09:12:10.866112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:11.948 [2024-11-06 09:12:10.866249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:11.948 [2024-11-06 09:12:10.866258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:11.948 [2024-11-06 09:12:10.866613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:11.948 [2024-11-06 09:12:10.866767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:11.948 [2024-11-06 09:12:10.866781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:11.948 [2024-11-06 09:12:10.866923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.948 pt4 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:11.948 09:12:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.948 "name": "raid_bdev1", 00:19:11.948 "uuid": "12f987cf-891a-45a0-840d-21c159cb3b3a", 00:19:11.948 "strip_size_kb": 0, 00:19:11.948 "state": "online", 00:19:11.948 "raid_level": "raid1", 00:19:11.948 "superblock": true, 00:19:11.948 "num_base_bdevs": 4, 00:19:11.948 "num_base_bdevs_discovered": 3, 00:19:11.948 "num_base_bdevs_operational": 3, 00:19:11.948 "base_bdevs_list": [ 00:19:11.948 { 
00:19:11.948 "name": null, 00:19:11.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.948 "is_configured": false, 00:19:11.948 "data_offset": 2048, 00:19:11.948 "data_size": 63488 00:19:11.948 }, 00:19:11.948 { 00:19:11.948 "name": "pt2", 00:19:11.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.948 "is_configured": true, 00:19:11.948 "data_offset": 2048, 00:19:11.948 "data_size": 63488 00:19:11.948 }, 00:19:11.948 { 00:19:11.948 "name": "pt3", 00:19:11.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.948 "is_configured": true, 00:19:11.948 "data_offset": 2048, 00:19:11.948 "data_size": 63488 00:19:11.948 }, 00:19:11.948 { 00:19:11.948 "name": "pt4", 00:19:11.948 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:11.948 "is_configured": true, 00:19:11.948 "data_offset": 2048, 00:19:11.948 "data_size": 63488 00:19:11.948 } 00:19:11.948 ] 00:19:11.948 }' 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.948 09:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.514 
09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:12.514 [2024-11-06 09:12:11.341239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 12f987cf-891a-45a0-840d-21c159cb3b3a '!=' 12f987cf-891a-45a0-840d-21c159cb3b3a ']' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74260 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74260 ']' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74260 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74260 00:19:12.514 killing process with pid 74260 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74260' 00:19:12.514 09:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74260 00:19:12.514 [2024-11-06 09:12:11.425335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.514 [2024-11-06 09:12:11.425437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.514 09:12:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74260 00:19:12.514 [2024-11-06 09:12:11.425513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.514 [2024-11-06 09:12:11.425527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:13.081 [2024-11-06 09:12:11.830700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.015 09:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:14.015 00:19:14.015 real 0m8.349s 00:19:14.015 user 0m13.116s 00:19:14.015 sys 0m1.715s 00:19:14.015 09:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:14.015 ************************************ 00:19:14.015 END TEST raid_superblock_test 00:19:14.015 ************************************ 00:19:14.015 09:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.015 09:12:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:19:14.015 09:12:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:14.015 09:12:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:14.015 09:12:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.015 ************************************ 00:19:14.015 START TEST raid_read_error_test 00:19:14.015 ************************************ 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:14.015 
09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:14.015 09:12:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:14.015 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SO2OjlXTMp 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74753 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74753 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 74753 ']' 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.274 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.274 [2024-11-06 09:12:13.151973] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:19:14.274 [2024-11-06 09:12:13.152101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74753 ] 00:19:14.533 [2024-11-06 09:12:13.334122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.533 [2024-11-06 09:12:13.450891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.791 [2024-11-06 09:12:13.638959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.791 [2024-11-06 09:12:13.639027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.051 09:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.051 BaseBdev1_malloc 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.051 true 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.051 [2024-11-06 09:12:14.040735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:15.051 [2024-11-06 09:12:14.040793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.051 [2024-11-06 09:12:14.040816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:15.051 [2024-11-06 09:12:14.040831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.051 [2024-11-06 09:12:14.043276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.051 [2024-11-06 09:12:14.043354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.051 BaseBdev1 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.051 BaseBdev2_malloc 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.051 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.310 true 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.310 [2024-11-06 09:12:14.105931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:15.310 [2024-11-06 09:12:14.105988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.310 [2024-11-06 09:12:14.106007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:15.310 [2024-11-06 09:12:14.106020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.310 [2024-11-06 09:12:14.108375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.310 [2024-11-06 09:12:14.108543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:15.310 BaseBdev2 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.310 BaseBdev3_malloc 00:19:15.310 09:12:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.310 true 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.310 [2024-11-06 09:12:14.185918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:15.310 [2024-11-06 09:12:14.185971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.310 [2024-11-06 09:12:14.185992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:15.310 [2024-11-06 09:12:14.186006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.310 [2024-11-06 09:12:14.188352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.310 [2024-11-06 09:12:14.188504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:15.310 BaseBdev3 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.310 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 BaseBdev4_malloc 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 true 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 [2024-11-06 09:12:14.256667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:15.311 [2024-11-06 09:12:14.256720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.311 [2024-11-06 09:12:14.256740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:15.311 [2024-11-06 09:12:14.256754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.311 [2024-11-06 09:12:14.259075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.311 [2024-11-06 09:12:14.259118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:15.311 BaseBdev4 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 [2024-11-06 09:12:14.268712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.311 [2024-11-06 09:12:14.270784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.311 [2024-11-06 09:12:14.271011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.311 [2024-11-06 09:12:14.271106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:15.311 [2024-11-06 09:12:14.271358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:15.311 [2024-11-06 09:12:14.271375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:15.311 [2024-11-06 09:12:14.271632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:15.311 [2024-11-06 09:12:14.271799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:15.311 [2024-11-06 09:12:14.271810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:15.311 [2024-11-06 09:12:14.271954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:15.311 09:12:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.311 "name": "raid_bdev1", 00:19:15.311 "uuid": "6f59af2f-40a2-449d-9450-0aad61f4d2c9", 00:19:15.311 "strip_size_kb": 0, 00:19:15.311 "state": "online", 00:19:15.311 "raid_level": "raid1", 00:19:15.311 "superblock": true, 00:19:15.311 "num_base_bdevs": 4, 00:19:15.311 "num_base_bdevs_discovered": 4, 00:19:15.311 "num_base_bdevs_operational": 4, 00:19:15.311 "base_bdevs_list": [ 00:19:15.311 { 
00:19:15.311 "name": "BaseBdev1", 00:19:15.311 "uuid": "f1138c98-d8f7-56fc-9eca-fe4689d0ffaa", 00:19:15.311 "is_configured": true, 00:19:15.311 "data_offset": 2048, 00:19:15.311 "data_size": 63488 00:19:15.311 }, 00:19:15.311 { 00:19:15.311 "name": "BaseBdev2", 00:19:15.311 "uuid": "597e938c-ee90-5b17-8b12-09f3f0c2c026", 00:19:15.311 "is_configured": true, 00:19:15.311 "data_offset": 2048, 00:19:15.311 "data_size": 63488 00:19:15.311 }, 00:19:15.311 { 00:19:15.311 "name": "BaseBdev3", 00:19:15.311 "uuid": "746923a2-8710-53e6-9205-0858776395ff", 00:19:15.311 "is_configured": true, 00:19:15.311 "data_offset": 2048, 00:19:15.311 "data_size": 63488 00:19:15.311 }, 00:19:15.311 { 00:19:15.311 "name": "BaseBdev4", 00:19:15.311 "uuid": "184361a8-2190-5b9b-a3a9-df3dca41a38c", 00:19:15.311 "is_configured": true, 00:19:15.311 "data_offset": 2048, 00:19:15.311 "data_size": 63488 00:19:15.311 } 00:19:15.311 ] 00:19:15.311 }' 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.311 09:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.878 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:15.878 09:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:15.878 [2024-11-06 09:12:14.825347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.816 09:12:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 09:12:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.816 "name": "raid_bdev1", 00:19:16.816 "uuid": "6f59af2f-40a2-449d-9450-0aad61f4d2c9", 00:19:16.816 "strip_size_kb": 0, 00:19:16.816 "state": "online", 00:19:16.816 "raid_level": "raid1", 00:19:16.816 "superblock": true, 00:19:16.816 "num_base_bdevs": 4, 00:19:16.816 "num_base_bdevs_discovered": 4, 00:19:16.816 "num_base_bdevs_operational": 4, 00:19:16.816 "base_bdevs_list": [ 00:19:16.816 { 00:19:16.816 "name": "BaseBdev1", 00:19:16.816 "uuid": "f1138c98-d8f7-56fc-9eca-fe4689d0ffaa", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev2", 00:19:16.816 "uuid": "597e938c-ee90-5b17-8b12-09f3f0c2c026", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev3", 00:19:16.816 "uuid": "746923a2-8710-53e6-9205-0858776395ff", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 }, 00:19:16.816 { 00:19:16.816 "name": "BaseBdev4", 00:19:16.816 "uuid": "184361a8-2190-5b9b-a3a9-df3dca41a38c", 00:19:16.816 "is_configured": true, 00:19:16.816 "data_offset": 2048, 00:19:16.816 "data_size": 63488 00:19:16.816 } 00:19:16.816 ] 00:19:16.816 }' 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.816 09:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.384 [2024-11-06 09:12:16.160268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.384 [2024-11-06 09:12:16.160305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.384 [2024-11-06 09:12:16.163145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.384 [2024-11-06 09:12:16.163390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.384 [2024-11-06 09:12:16.163542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.384 [2024-11-06 09:12:16.163560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:17.384 { 00:19:17.384 "results": [ 00:19:17.384 { 00:19:17.384 "job": "raid_bdev1", 00:19:17.384 "core_mask": "0x1", 00:19:17.384 "workload": "randrw", 00:19:17.384 "percentage": 50, 00:19:17.384 "status": "finished", 00:19:17.384 "queue_depth": 1, 00:19:17.384 "io_size": 131072, 00:19:17.384 "runtime": 1.334943, 00:19:17.384 "iops": 11145.794239903877, 00:19:17.384 "mibps": 1393.2242799879846, 00:19:17.384 "io_failed": 0, 00:19:17.384 "io_timeout": 0, 00:19:17.384 "avg_latency_us": 87.0907706098269, 00:19:17.384 "min_latency_us": 23.749397590361447, 00:19:17.384 "max_latency_us": 1460.7421686746989 00:19:17.384 } 00:19:17.384 ], 00:19:17.384 "core_count": 1 00:19:17.384 } 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74753 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 74753 ']' 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 74753 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74753 00:19:17.384 killing process with pid 74753 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74753' 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 74753 00:19:17.384 [2024-11-06 09:12:16.195928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.384 09:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 74753 00:19:17.643 [2024-11-06 09:12:16.533573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SO2OjlXTMp 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:18.766 ************************************ 00:19:18.766 END TEST raid_read_error_test 00:19:18.766 ************************************ 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:18.766 00:19:18.766 real 0m4.722s 00:19:18.766 user 0m5.475s 00:19:18.766 sys 0m0.657s 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.766 09:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.025 09:12:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:19:19.025 09:12:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:19.025 09:12:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:19.025 09:12:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.025 ************************************ 00:19:19.025 START TEST raid_write_error_test 00:19:19.025 ************************************ 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DuG94naoUt 00:19:19.025 09:12:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74893 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74893 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 74893 ']' 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.025 09:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.025 [2024-11-06 09:12:17.964877] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:19:19.025 [2024-11-06 09:12:17.965069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74893 ] 00:19:19.283 [2024-11-06 09:12:18.150302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.283 [2024-11-06 09:12:18.273042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.542 [2024-11-06 09:12:18.482887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.542 [2024-11-06 09:12:18.482959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 BaseBdev1_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 true 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 [2024-11-06 09:12:18.905262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:20.111 [2024-11-06 09:12:18.905489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.111 [2024-11-06 09:12:18.905524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:20.111 [2024-11-06 09:12:18.905539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.111 [2024-11-06 09:12:18.908111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.111 [2024-11-06 09:12:18.908156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.111 BaseBdev1 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 BaseBdev2_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:20.111 09:12:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 true 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 [2024-11-06 09:12:18.972433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:20.111 [2024-11-06 09:12:18.972612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.111 [2024-11-06 09:12:18.972669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:20.111 [2024-11-06 09:12:18.972687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.111 [2024-11-06 09:12:18.975297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.111 [2024-11-06 09:12:18.975352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.111 BaseBdev2 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:20.111 BaseBdev3_malloc 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 true 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 [2024-11-06 09:12:19.051802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:20.111 [2024-11-06 09:12:19.051863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.111 [2024-11-06 09:12:19.051885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:20.111 [2024-11-06 09:12:19.051899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.111 [2024-11-06 09:12:19.054453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.111 [2024-11-06 09:12:19.054495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:20.111 BaseBdev3 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 BaseBdev4_malloc 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 true 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 [2024-11-06 09:12:19.122322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:20.111 [2024-11-06 09:12:19.122488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.111 [2024-11-06 09:12:19.122544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:20.111 [2024-11-06 09:12:19.122561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.111 [2024-11-06 09:12:19.124907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.111 [2024-11-06 09:12:19.124953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:20.111 BaseBdev4 
00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.111 [2024-11-06 09:12:19.134386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.111 [2024-11-06 09:12:19.136713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.111 [2024-11-06 09:12:19.136791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.111 [2024-11-06 09:12:19.136860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.111 [2024-11-06 09:12:19.137102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:20.111 [2024-11-06 09:12:19.137135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:20.111 [2024-11-06 09:12:19.137458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:20.111 [2024-11-06 09:12:19.137642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:20.111 [2024-11-06 09:12:19.137653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:20.111 [2024-11-06 09:12:19.137830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.111 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.370 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.370 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.370 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.370 "name": "raid_bdev1", 00:19:20.370 "uuid": "8cc9f83c-c17a-4ad1-ae79-5bab45707db9", 00:19:20.370 "strip_size_kb": 0, 00:19:20.370 "state": "online", 00:19:20.370 "raid_level": "raid1", 00:19:20.370 "superblock": true, 00:19:20.370 "num_base_bdevs": 4, 00:19:20.370 "num_base_bdevs_discovered": 4, 00:19:20.370 
"num_base_bdevs_operational": 4, 00:19:20.370 "base_bdevs_list": [ 00:19:20.370 { 00:19:20.370 "name": "BaseBdev1", 00:19:20.370 "uuid": "77cdfb5d-db4e-5832-b80b-99958670bcd9", 00:19:20.370 "is_configured": true, 00:19:20.370 "data_offset": 2048, 00:19:20.370 "data_size": 63488 00:19:20.370 }, 00:19:20.370 { 00:19:20.370 "name": "BaseBdev2", 00:19:20.370 "uuid": "2260b686-69d8-5b1e-a659-8484d7fbe98a", 00:19:20.370 "is_configured": true, 00:19:20.370 "data_offset": 2048, 00:19:20.370 "data_size": 63488 00:19:20.370 }, 00:19:20.370 { 00:19:20.370 "name": "BaseBdev3", 00:19:20.370 "uuid": "0ecf61e3-99a8-5672-9087-3c9eea6212aa", 00:19:20.370 "is_configured": true, 00:19:20.370 "data_offset": 2048, 00:19:20.370 "data_size": 63488 00:19:20.370 }, 00:19:20.370 { 00:19:20.370 "name": "BaseBdev4", 00:19:20.370 "uuid": "18eb51fc-6771-5e1d-a46d-37df8c951ada", 00:19:20.370 "is_configured": true, 00:19:20.370 "data_offset": 2048, 00:19:20.370 "data_size": 63488 00:19:20.370 } 00:19:20.370 ] 00:19:20.370 }' 00:19:20.370 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.370 09:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.628 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:20.628 09:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:20.628 [2024-11-06 09:12:19.635689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:21.562 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.563 [2024-11-06 09:12:20.542152] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:21.563 [2024-11-06 09:12:20.542416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.563 [2024-11-06 09:12:20.542679] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.563 "name": "raid_bdev1", 00:19:21.563 "uuid": "8cc9f83c-c17a-4ad1-ae79-5bab45707db9", 00:19:21.563 "strip_size_kb": 0, 00:19:21.563 "state": "online", 00:19:21.563 "raid_level": "raid1", 00:19:21.563 "superblock": true, 00:19:21.563 "num_base_bdevs": 4, 00:19:21.563 "num_base_bdevs_discovered": 3, 00:19:21.563 "num_base_bdevs_operational": 3, 00:19:21.563 "base_bdevs_list": [ 00:19:21.563 { 00:19:21.563 "name": null, 00:19:21.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.563 "is_configured": false, 00:19:21.563 "data_offset": 0, 00:19:21.563 "data_size": 63488 00:19:21.563 }, 00:19:21.563 { 00:19:21.563 "name": "BaseBdev2", 00:19:21.563 "uuid": "2260b686-69d8-5b1e-a659-8484d7fbe98a", 00:19:21.563 "is_configured": true, 00:19:21.563 "data_offset": 2048, 00:19:21.563 "data_size": 63488 00:19:21.563 }, 00:19:21.563 { 00:19:21.563 "name": "BaseBdev3", 00:19:21.563 "uuid": "0ecf61e3-99a8-5672-9087-3c9eea6212aa", 00:19:21.563 "is_configured": true, 00:19:21.563 "data_offset": 2048, 00:19:21.563 "data_size": 63488 00:19:21.563 }, 00:19:21.563 { 00:19:21.563 "name": "BaseBdev4", 00:19:21.563 "uuid": "18eb51fc-6771-5e1d-a46d-37df8c951ada", 00:19:21.563 "is_configured": true, 00:19:21.563 "data_offset": 2048, 00:19:21.563 "data_size": 63488 00:19:21.563 } 00:19:21.563 ] 
00:19:21.563 }' 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.563 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.134 [2024-11-06 09:12:20.987016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.134 [2024-11-06 09:12:20.987050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.134 [2024-11-06 09:12:20.990145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.134 [2024-11-06 09:12:20.990314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.134 [2024-11-06 09:12:20.990504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.134 [2024-11-06 09:12:20.990671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.134 { 00:19:22.134 "results": [ 00:19:22.134 { 00:19:22.134 "job": "raid_bdev1", 00:19:22.134 "core_mask": "0x1", 00:19:22.134 "workload": "randrw", 00:19:22.134 "percentage": 50, 00:19:22.134 "status": "finished", 00:19:22.134 "queue_depth": 1, 00:19:22.134 "io_size": 131072, 00:19:22.134 "runtime": 1.351306, 00:19:22.134 "iops": 11135.893720593263, 00:19:22.134 "mibps": 1391.986715074158, 00:19:22.134 "io_failed": 0, 00:19:22.134 "io_timeout": 0, 00:19:22.134 "avg_latency_us": 86.9527129250655, 00:19:22.134 "min_latency_us": 24.057831325301205, 
00:19:22.134 "max_latency_us": 1559.4409638554216 00:19:22.134 } 00:19:22.134 ], 00:19:22.134 "core_count": 1 00:19:22.134 } 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74893 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 74893 ']' 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 74893 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:19:22.134 09:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74893 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:22.134 killing process with pid 74893 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74893' 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 74893 00:19:22.134 [2024-11-06 09:12:21.027362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.134 09:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 74893 00:19:22.392 [2024-11-06 09:12:21.377265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DuG94naoUt 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:23.765 00:19:23.765 real 0m4.776s 00:19:23.765 user 0m5.595s 00:19:23.765 sys 0m0.620s 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.765 09:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 ************************************ 00:19:23.765 END TEST raid_write_error_test 00:19:23.765 ************************************ 00:19:23.765 09:12:22 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:19:23.765 09:12:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:23.765 09:12:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:19:23.765 09:12:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:23.765 09:12:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.765 09:12:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 ************************************ 00:19:23.765 START TEST raid_rebuild_test 00:19:23.765 ************************************ 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:23.765 
09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
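The xtrace entries above show `raid_rebuild_test` building its base-bdev list (`base_bdevs=('BaseBdev1' 'BaseBdev2')`) and, because the level is raid1, taking the `strip_size=0` branch at `bdev_raid.sh@589`. A plain-Python restatement of that setup logic (an illustration only, not SPDK code):

```python
# Offline mirror of the raid_rebuild_test setup traced above: the shell loop
# echoes BaseBdev$i for i in 1..num_base_bdevs, and raid1 (a mirror, not a
# striped level) leaves strip_size at 0.
raid_level = "raid1"
num_base_bdevs = 2

base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]
assert base_bdevs == ["BaseBdev1", "BaseBdev2"]

# bdev_raid.sh only computes a strip size for striped levels; for raid1 the
# '[' raid1 '!=' raid1 ']' test fails and strip_size stays 0.
if raid_level == "raid1":
    strip_size = 0
assert strip_size == 0
```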
00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75041 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75041 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75041 ']' 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.765 09:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 [2024-11-06 09:12:22.797487] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:19:23.765 [2024-11-06 09:12:22.797813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ] 00:19:23.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:23.765 Zero copy mechanism will not be used. 
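The bdevperf invocation above uses `-o 3M` (3 MiB I/Os), which is why the log notes that the 65536-byte zero-copy threshold is exceeded, and the results JSON printed earlier for `raid_write_error_test` derives MiB/s from IOPS and the 128 KiB I/O size. An offline sanity check of both figures, with the constants copied verbatim from this log (the script below is not part of the SPDK test suite):

```python
# Values copied from the log output in this run.
ZERO_COPY_THRESHOLD = 65536          # bytes, from the bdevperf notice
io_size_bdevperf = 3 * 1024 * 1024   # "-o 3M" on the bdevperf command line

# Zero copy is skipped because the configured I/O size exceeds the threshold.
assert io_size_bdevperf == 3145728
assert io_size_bdevperf > ZERO_COPY_THRESHOLD

# raid_write_error_test results: MiB/s should equal IOPS * io_size / 2^20.
iops = 11135.893720593263
io_size = 131072                     # 128 KiB per I/O, "io_size" in the JSON
mibps = iops * io_size / (1024 * 1024)
assert abs(mibps - 1391.986715074158) < 1e-6
print(f"{mibps:.3f} MiB/s")          # → 1391.987 MiB/s
```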
00:19:24.023 [2024-11-06 09:12:22.981521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.281 [2024-11-06 09:12:23.103004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.281 [2024-11-06 09:12:23.316219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.281 [2024-11-06 09:12:23.316484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.853 BaseBdev1_malloc 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.853 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.853 [2024-11-06 09:12:23.701365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:24.853 [2024-11-06 09:12:23.701599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.853 [2024-11-06 09:12:23.701660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:24.853 [2024-11-06 09:12:23.701753] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.853 [2024-11-06 09:12:23.704257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.853 [2024-11-06 09:12:23.704412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:24.854 BaseBdev1 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 BaseBdev2_malloc 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 [2024-11-06 09:12:23.759575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:24.854 [2024-11-06 09:12:23.759775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.854 [2024-11-06 09:12:23.759833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:24.854 [2024-11-06 09:12:23.759932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.854 [2024-11-06 09:12:23.762355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.854 [2024-11-06 09:12:23.762499] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:24.854 BaseBdev2 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 spare_malloc 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 spare_delay 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 [2024-11-06 09:12:23.844004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:24.854 [2024-11-06 09:12:23.844178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.854 [2024-11-06 09:12:23.844233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:24.854 [2024-11-06 09:12:23.844352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.854 [2024-11-06 
09:12:23.846817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.854 [2024-11-06 09:12:23.846973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:24.854 spare 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 [2024-11-06 09:12:23.856043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.854 [2024-11-06 09:12:23.858190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.854 [2024-11-06 09:12:23.858409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:24.854 [2024-11-06 09:12:23.858435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:24.854 [2024-11-06 09:12:23.858715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:24.854 [2024-11-06 09:12:23.858877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:24.854 [2024-11-06 09:12:23.858902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:24.854 [2024-11-06 09:12:23.859053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.854 09:12:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.854 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.114 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.114 "name": "raid_bdev1", 00:19:25.114 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:25.114 "strip_size_kb": 0, 00:19:25.114 "state": "online", 00:19:25.114 "raid_level": "raid1", 00:19:25.114 "superblock": false, 00:19:25.114 "num_base_bdevs": 2, 00:19:25.114 "num_base_bdevs_discovered": 2, 00:19:25.114 "num_base_bdevs_operational": 2, 00:19:25.114 "base_bdevs_list": [ 00:19:25.114 { 00:19:25.114 "name": "BaseBdev1", 
00:19:25.114 "uuid": "0e5ca12f-d086-527f-958f-37de0b337c56", 00:19:25.114 "is_configured": true, 00:19:25.114 "data_offset": 0, 00:19:25.114 "data_size": 65536 00:19:25.114 }, 00:19:25.114 { 00:19:25.114 "name": "BaseBdev2", 00:19:25.114 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:25.114 "is_configured": true, 00:19:25.114 "data_offset": 0, 00:19:25.114 "data_size": 65536 00:19:25.114 } 00:19:25.114 ] 00:19:25.114 }' 00:19:25.114 09:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.114 09:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:25.379 [2024-11-06 09:12:24.251792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:25.379 
09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.379 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:25.638 [2024-11-06 09:12:24.527220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:25.638 /dev/nbd0 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.638 1+0 records in 00:19:25.638 1+0 records out 00:19:25.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410414 s, 10.0 MB/s 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:25.638 09:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:19:30.971 65536+0 records in 00:19:30.971 65536+0 records out 00:19:30.971 33554432 bytes (34 MB, 32 MiB) copied, 5.26777 s, 6.4 MB/s 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.971 09:12:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:31.230 [2024-11-06 09:12:30.075445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.230 [2024-11-06 09:12:30.109318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.230 09:12:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.230 "name": "raid_bdev1", 00:19:31.230 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:31.230 "strip_size_kb": 0, 00:19:31.230 "state": "online", 00:19:31.230 "raid_level": "raid1", 00:19:31.230 "superblock": false, 00:19:31.230 "num_base_bdevs": 2, 00:19:31.230 "num_base_bdevs_discovered": 1, 00:19:31.230 "num_base_bdevs_operational": 1, 00:19:31.230 "base_bdevs_list": [ 00:19:31.230 { 00:19:31.230 "name": null, 00:19:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.230 "is_configured": false, 00:19:31.230 "data_offset": 0, 00:19:31.230 "data_size": 65536 00:19:31.230 }, 00:19:31.230 { 00:19:31.230 "name": "BaseBdev2", 00:19:31.230 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:31.230 "is_configured": true, 00:19:31.230 "data_offset": 0, 00:19:31.230 "data_size": 65536 00:19:31.230 } 00:19:31.230 ] 00:19:31.230 }' 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.230 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.795 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.795 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.795 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.795 [2024-11-06 09:12:30.552701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.795 [2024-11-06 09:12:30.570564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:19:31.795 09:12:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.795 09:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:31.795 [2024-11-06 09:12:30.572797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.766 "name": "raid_bdev1", 00:19:32.766 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:32.766 "strip_size_kb": 0, 00:19:32.766 "state": "online", 00:19:32.766 "raid_level": "raid1", 00:19:32.766 "superblock": false, 00:19:32.766 "num_base_bdevs": 2, 00:19:32.766 "num_base_bdevs_discovered": 2, 00:19:32.766 "num_base_bdevs_operational": 2, 00:19:32.766 "process": { 00:19:32.766 "type": "rebuild", 00:19:32.766 "target": "spare", 00:19:32.766 "progress": { 00:19:32.766 "blocks": 20480, 00:19:32.766 "percent": 31 00:19:32.766 } 00:19:32.766 }, 00:19:32.766 "base_bdevs_list": [ 00:19:32.766 { 00:19:32.766 "name": "spare", 00:19:32.766 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:32.766 "is_configured": true, 00:19:32.766 "data_offset": 0, 00:19:32.766 
"data_size": 65536 00:19:32.766 }, 00:19:32.766 { 00:19:32.766 "name": "BaseBdev2", 00:19:32.766 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:32.766 "is_configured": true, 00:19:32.766 "data_offset": 0, 00:19:32.766 "data_size": 65536 00:19:32.766 } 00:19:32.766 ] 00:19:32.766 }' 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.766 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.767 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.767 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:32.767 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.767 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.767 [2024-11-06 09:12:31.716481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.767 [2024-11-06 09:12:31.778752] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:32.767 [2024-11-06 09:12:31.779056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.767 [2024-11-06 09:12:31.779088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.767 [2024-11-06 09:12:31.779106] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.027 "name": "raid_bdev1", 00:19:33.027 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:33.027 "strip_size_kb": 0, 00:19:33.027 "state": "online", 00:19:33.027 "raid_level": "raid1", 00:19:33.027 "superblock": false, 00:19:33.027 "num_base_bdevs": 2, 00:19:33.027 "num_base_bdevs_discovered": 1, 00:19:33.027 "num_base_bdevs_operational": 1, 00:19:33.027 "base_bdevs_list": [ 00:19:33.027 { 00:19:33.027 "name": null, 00:19:33.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.027 
"is_configured": false, 00:19:33.027 "data_offset": 0, 00:19:33.027 "data_size": 65536 00:19:33.027 }, 00:19:33.027 { 00:19:33.027 "name": "BaseBdev2", 00:19:33.027 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:33.027 "is_configured": true, 00:19:33.027 "data_offset": 0, 00:19:33.027 "data_size": 65536 00:19:33.027 } 00:19:33.027 ] 00:19:33.027 }' 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.027 09:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.285 "name": "raid_bdev1", 00:19:33.285 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:33.285 "strip_size_kb": 0, 00:19:33.285 "state": "online", 00:19:33.285 "raid_level": "raid1", 00:19:33.285 "superblock": false, 00:19:33.285 "num_base_bdevs": 2, 00:19:33.285 
"num_base_bdevs_discovered": 1, 00:19:33.285 "num_base_bdevs_operational": 1, 00:19:33.285 "base_bdevs_list": [ 00:19:33.285 { 00:19:33.285 "name": null, 00:19:33.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.285 "is_configured": false, 00:19:33.285 "data_offset": 0, 00:19:33.285 "data_size": 65536 00:19:33.285 }, 00:19:33.285 { 00:19:33.285 "name": "BaseBdev2", 00:19:33.285 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:33.285 "is_configured": true, 00:19:33.285 "data_offset": 0, 00:19:33.285 "data_size": 65536 00:19:33.285 } 00:19:33.285 ] 00:19:33.285 }' 00:19:33.285 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.544 [2024-11-06 09:12:32.379449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.544 [2024-11-06 09:12:32.396294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.544 09:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:33.544 [2024-11-06 09:12:32.398678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.521 "name": "raid_bdev1", 00:19:34.521 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:34.521 "strip_size_kb": 0, 00:19:34.521 "state": "online", 00:19:34.521 "raid_level": "raid1", 00:19:34.521 "superblock": false, 00:19:34.521 "num_base_bdevs": 2, 00:19:34.521 "num_base_bdevs_discovered": 2, 00:19:34.521 "num_base_bdevs_operational": 2, 00:19:34.521 "process": { 00:19:34.521 "type": "rebuild", 00:19:34.521 "target": "spare", 00:19:34.521 "progress": { 00:19:34.521 "blocks": 20480, 00:19:34.521 "percent": 31 00:19:34.521 } 00:19:34.521 }, 00:19:34.521 "base_bdevs_list": [ 00:19:34.521 { 00:19:34.521 "name": "spare", 00:19:34.521 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:34.521 "is_configured": true, 00:19:34.521 "data_offset": 0, 00:19:34.521 "data_size": 65536 00:19:34.521 }, 00:19:34.521 { 00:19:34.521 "name": "BaseBdev2", 00:19:34.521 "uuid": 
"a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:34.521 "is_configured": true, 00:19:34.521 "data_offset": 0, 00:19:34.521 "data_size": 65536 00:19:34.521 } 00:19:34.521 ] 00:19:34.521 }' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.521 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.780 "name": "raid_bdev1", 00:19:34.780 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:34.780 "strip_size_kb": 0, 00:19:34.780 "state": "online", 00:19:34.780 "raid_level": "raid1", 00:19:34.780 "superblock": false, 00:19:34.780 "num_base_bdevs": 2, 00:19:34.780 "num_base_bdevs_discovered": 2, 00:19:34.780 "num_base_bdevs_operational": 2, 00:19:34.780 "process": { 00:19:34.780 "type": "rebuild", 00:19:34.780 "target": "spare", 00:19:34.780 "progress": { 00:19:34.780 "blocks": 22528, 00:19:34.780 "percent": 34 00:19:34.780 } 00:19:34.780 }, 00:19:34.780 "base_bdevs_list": [ 00:19:34.780 { 00:19:34.780 "name": "spare", 00:19:34.780 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:34.780 "is_configured": true, 00:19:34.780 "data_offset": 0, 00:19:34.780 "data_size": 65536 00:19:34.780 }, 00:19:34.780 { 00:19:34.780 "name": "BaseBdev2", 00:19:34.780 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:34.780 "is_configured": true, 00:19:34.780 "data_offset": 0, 00:19:34.780 "data_size": 65536 00:19:34.780 } 00:19:34.780 ] 00:19:34.780 }' 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.780 09:12:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.714 "name": "raid_bdev1", 00:19:35.714 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:35.714 "strip_size_kb": 0, 00:19:35.714 "state": "online", 00:19:35.714 "raid_level": "raid1", 00:19:35.714 "superblock": false, 00:19:35.714 "num_base_bdevs": 2, 00:19:35.714 "num_base_bdevs_discovered": 2, 00:19:35.714 "num_base_bdevs_operational": 2, 00:19:35.714 "process": { 00:19:35.714 "type": "rebuild", 00:19:35.714 "target": "spare", 00:19:35.714 "progress": { 00:19:35.714 "blocks": 45056, 00:19:35.714 "percent": 68 00:19:35.714 } 00:19:35.714 }, 00:19:35.714 "base_bdevs_list": [ 00:19:35.714 { 00:19:35.714 "name": "spare", 00:19:35.714 "uuid": 
"33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:35.714 "is_configured": true, 00:19:35.714 "data_offset": 0, 00:19:35.714 "data_size": 65536 00:19:35.714 }, 00:19:35.714 { 00:19:35.714 "name": "BaseBdev2", 00:19:35.714 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:35.714 "is_configured": true, 00:19:35.714 "data_offset": 0, 00:19:35.714 "data_size": 65536 00:19:35.714 } 00:19:35.714 ] 00:19:35.714 }' 00:19:35.714 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.971 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.971 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.971 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.971 09:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.906 [2024-11-06 09:12:35.614598] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:36.906 [2024-11-06 09:12:35.614688] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:36.906 [2024-11-06 09:12:35.614773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.906 09:12:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.906 "name": "raid_bdev1", 00:19:36.906 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:36.906 "strip_size_kb": 0, 00:19:36.906 "state": "online", 00:19:36.906 "raid_level": "raid1", 00:19:36.906 "superblock": false, 00:19:36.906 "num_base_bdevs": 2, 00:19:36.906 "num_base_bdevs_discovered": 2, 00:19:36.906 "num_base_bdevs_operational": 2, 00:19:36.906 "base_bdevs_list": [ 00:19:36.906 { 00:19:36.906 "name": "spare", 00:19:36.906 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:36.906 "is_configured": true, 00:19:36.906 "data_offset": 0, 00:19:36.906 "data_size": 65536 00:19:36.906 }, 00:19:36.906 { 00:19:36.906 "name": "BaseBdev2", 00:19:36.906 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:36.906 "is_configured": true, 00:19:36.906 "data_offset": 0, 00:19:36.906 "data_size": 65536 00:19:36.906 } 00:19:36.906 ] 00:19:36.906 }' 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.906 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.907 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.907 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.907 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.180 09:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.180 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.180 "name": "raid_bdev1", 00:19:37.180 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:37.180 "strip_size_kb": 0, 00:19:37.180 "state": "online", 00:19:37.180 "raid_level": "raid1", 00:19:37.180 "superblock": false, 00:19:37.180 "num_base_bdevs": 2, 00:19:37.180 "num_base_bdevs_discovered": 2, 00:19:37.180 "num_base_bdevs_operational": 2, 00:19:37.180 "base_bdevs_list": [ 00:19:37.180 { 00:19:37.180 "name": "spare", 00:19:37.180 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:37.180 "is_configured": true, 00:19:37.180 "data_offset": 0, 00:19:37.180 "data_size": 65536 00:19:37.180 }, 00:19:37.180 { 00:19:37.180 "name": "BaseBdev2", 00:19:37.180 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:37.180 "is_configured": true, 00:19:37.180 "data_offset": 0, 00:19:37.180 "data_size": 65536 
00:19:37.180 } 00:19:37.180 ] 00:19:37.180 }' 00:19:37.180 09:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.180 
09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.180 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.180 "name": "raid_bdev1", 00:19:37.180 "uuid": "a5574744-e678-43f5-86cf-ecfc82e2e3c5", 00:19:37.180 "strip_size_kb": 0, 00:19:37.180 "state": "online", 00:19:37.180 "raid_level": "raid1", 00:19:37.180 "superblock": false, 00:19:37.180 "num_base_bdevs": 2, 00:19:37.181 "num_base_bdevs_discovered": 2, 00:19:37.181 "num_base_bdevs_operational": 2, 00:19:37.181 "base_bdevs_list": [ 00:19:37.181 { 00:19:37.181 "name": "spare", 00:19:37.181 "uuid": "33cdc5f6-87d1-507a-bb1c-b976fe36ccaf", 00:19:37.181 "is_configured": true, 00:19:37.181 "data_offset": 0, 00:19:37.181 "data_size": 65536 00:19:37.181 }, 00:19:37.181 { 00:19:37.181 "name": "BaseBdev2", 00:19:37.181 "uuid": "a8c45292-aeb1-56b1-b3fa-d3935379b302", 00:19:37.181 "is_configured": true, 00:19:37.181 "data_offset": 0, 00:19:37.181 "data_size": 65536 00:19:37.181 } 00:19:37.181 ] 00:19:37.181 }' 00:19:37.181 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.181 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.439 [2024-11-06 09:12:36.419827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.439 [2024-11-06 09:12:36.419999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.439 [2024-11-06 09:12:36.420116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.439 [2024-11-06 09:12:36.420193] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.439 [2024-11-06 09:12:36.420206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:37.439 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:37.698 /dev/nbd0 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:37.698 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:37.958 1+0 records in 00:19:37.958 1+0 records out 00:19:37.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417195 s, 9.8 MB/s 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:37.958 /dev/nbd1 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:37.958 09:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:38.217 09:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:38.217 1+0 records in 00:19:38.217 1+0 records out 00:19:38.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555019 s, 7.4 MB/s 00:19:38.217 09:12:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.217 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:38.475 
09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.475 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75041 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75041 ']' 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75041 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 
-- # uname 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75041 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75041' 00:19:38.734 killing process with pid 75041 00:19:38.734 Received shutdown signal, test time was about 60.000000 seconds 00:19:38.734 00:19:38.734 Latency(us) 00:19:38.734 [2024-11-06T09:12:37.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.734 [2024-11-06T09:12:37.774Z] =================================================================================================================== 00:19:38.734 [2024-11-06T09:12:37.774Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75041 00:19:38.734 09:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75041 00:19:38.734 [2024-11-06 09:12:37.698197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.993 [2024-11-06 09:12:38.014631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.381 09:12:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:40.381 00:19:40.381 real 0m16.454s 00:19:40.381 user 0m17.689s 00:19:40.381 sys 0m3.753s 00:19:40.381 09:12:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:40.381 09:12:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 ************************************ 00:19:40.381 END TEST raid_rebuild_test 
00:19:40.381 ************************************ 00:19:40.381 09:12:39 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:19:40.381 09:12:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:40.381 09:12:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:40.381 09:12:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 ************************************ 00:19:40.382 START TEST raid_rebuild_test_sb 00:19:40.382 ************************************ 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75466 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75466 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75466 ']' 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.382 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.382 09:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:40.382 Zero copy mechanism will not be used. 00:19:40.382 [2024-11-06 09:12:39.336191] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:19:40.382 [2024-11-06 09:12:39.336359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75466 ] 00:19:40.640 [2024-11-06 09:12:39.529994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.640 [2024-11-06 09:12:39.657911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.898 [2024-11-06 09:12:39.883619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.898 [2024-11-06 09:12:39.883877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.505 BaseBdev1_malloc 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.505 [2024-11-06 09:12:40.283614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:41.505 [2024-11-06 09:12:40.283869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.505 [2024-11-06 09:12:40.284020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:41.505 [2024-11-06 09:12:40.284139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.505 [2024-11-06 09:12:40.287086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.505 BaseBdev1 00:19:41.505 [2024-11-06 09:12:40.287287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.505 BaseBdev2_malloc 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.505 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.505 [2024-11-06 09:12:40.348493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:41.505 [2024-11-06 09:12:40.348738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.506 [2024-11-06 09:12:40.348893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:41.506 [2024-11-06 09:12:40.348995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.506 [2024-11-06 09:12:40.351762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.506 [2024-11-06 09:12:40.351933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:41.506 BaseBdev2 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 spare_malloc 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 spare_delay 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 [2024-11-06 09:12:40.441545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:41.506 [2024-11-06 09:12:40.441769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.506 [2024-11-06 09:12:40.441838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:41.506 [2024-11-06 09:12:40.441957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.506 [2024-11-06 09:12:40.444912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.506 [2024-11-06 09:12:40.445084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:41.506 spare 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 [2024-11-06 09:12:40.453657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.506 [2024-11-06 09:12:40.456085] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.506 [2024-11-06 09:12:40.456417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:41.506 [2024-11-06 09:12:40.456542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:41.506 [2024-11-06 09:12:40.456935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:41.506 [2024-11-06 09:12:40.457248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:41.506 [2024-11-06 09:12:40.457295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:41.506 [2024-11-06 09:12:40.457541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.506 09:12:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.506 "name": "raid_bdev1", 00:19:41.506 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:41.506 "strip_size_kb": 0, 00:19:41.506 "state": "online", 00:19:41.506 "raid_level": "raid1", 00:19:41.506 "superblock": true, 00:19:41.506 "num_base_bdevs": 2, 00:19:41.506 "num_base_bdevs_discovered": 2, 00:19:41.506 "num_base_bdevs_operational": 2, 00:19:41.506 "base_bdevs_list": [ 00:19:41.506 { 00:19:41.506 "name": "BaseBdev1", 00:19:41.506 "uuid": "1139987c-47b8-5776-a800-61607369180c", 00:19:41.506 "is_configured": true, 00:19:41.506 "data_offset": 2048, 00:19:41.506 "data_size": 63488 00:19:41.506 }, 00:19:41.506 { 00:19:41.506 "name": "BaseBdev2", 00:19:41.506 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:41.506 "is_configured": true, 00:19:41.506 "data_offset": 2048, 00:19:41.506 "data_size": 63488 00:19:41.506 } 00:19:41.506 ] 00:19:41.506 }' 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.506 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.074 [2024-11-06 09:12:40.849762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:42.074 09:12:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.074 09:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:42.334 [2024-11-06 09:12:41.121517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:42.334 /dev/nbd0 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.334 1+0 records in 00:19:42.334 1+0 records out 00:19:42.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304604 s, 13.4 MB/s 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:42.334 09:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:46.557 63488+0 records in 00:19:46.557 63488+0 records out 00:19:46.557 32505856 bytes (33 MB, 31 MiB) copied, 4.24971 s, 7.6 MB/s 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.557 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.816 [2024-11-06 09:12:45.714527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.816 [2024-11-06 09:12:45.735401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.816 "name": "raid_bdev1", 00:19:46.816 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:46.816 "strip_size_kb": 0, 00:19:46.816 "state": "online", 00:19:46.816 "raid_level": "raid1", 00:19:46.816 "superblock": true, 00:19:46.816 "num_base_bdevs": 2, 00:19:46.816 "num_base_bdevs_discovered": 1, 00:19:46.816 "num_base_bdevs_operational": 1, 00:19:46.816 "base_bdevs_list": [ 00:19:46.816 { 00:19:46.816 "name": null, 00:19:46.816 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:46.816 "is_configured": false, 00:19:46.816 "data_offset": 0, 00:19:46.816 "data_size": 63488 00:19:46.816 }, 00:19:46.816 { 00:19:46.816 "name": "BaseBdev2", 00:19:46.816 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:46.816 "is_configured": true, 00:19:46.816 "data_offset": 2048, 00:19:46.816 "data_size": 63488 00:19:46.816 } 00:19:46.816 ] 00:19:46.816 }' 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.816 09:12:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.382 09:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.382 09:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.383 09:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 [2024-11-06 09:12:46.178785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.383 [2024-11-06 09:12:46.197681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:19:47.383 09:12:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.383 09:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:47.383 [2024-11-06 09:12:46.200042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.316 
09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.316 "name": "raid_bdev1", 00:19:48.316 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:48.316 "strip_size_kb": 0, 00:19:48.316 "state": "online", 00:19:48.316 "raid_level": "raid1", 00:19:48.316 "superblock": true, 00:19:48.316 "num_base_bdevs": 2, 00:19:48.316 "num_base_bdevs_discovered": 2, 00:19:48.316 "num_base_bdevs_operational": 2, 00:19:48.316 "process": { 00:19:48.316 "type": "rebuild", 00:19:48.316 "target": "spare", 00:19:48.316 "progress": { 00:19:48.316 "blocks": 20480, 00:19:48.316 "percent": 32 00:19:48.316 } 00:19:48.316 }, 00:19:48.316 "base_bdevs_list": [ 00:19:48.316 { 00:19:48.316 "name": "spare", 00:19:48.316 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:48.316 "is_configured": true, 00:19:48.316 "data_offset": 2048, 00:19:48.316 "data_size": 63488 00:19:48.316 }, 00:19:48.316 { 00:19:48.316 "name": "BaseBdev2", 00:19:48.316 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:48.316 "is_configured": true, 00:19:48.316 "data_offset": 2048, 00:19:48.316 "data_size": 63488 00:19:48.316 } 00:19:48.316 ] 00:19:48.316 }' 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.316 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.316 [2024-11-06 09:12:47.351854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.575 [2024-11-06 09:12:47.406991] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.575 [2024-11-06 09:12:47.407099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.575 [2024-11-06 09:12:47.407118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.575 [2024-11-06 09:12:47.407131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.575 "name": "raid_bdev1", 00:19:48.575 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:48.575 "strip_size_kb": 0, 00:19:48.575 "state": "online", 00:19:48.575 "raid_level": "raid1", 00:19:48.575 "superblock": true, 00:19:48.575 "num_base_bdevs": 2, 00:19:48.575 "num_base_bdevs_discovered": 1, 00:19:48.575 "num_base_bdevs_operational": 1, 00:19:48.575 "base_bdevs_list": [ 00:19:48.575 { 00:19:48.575 "name": null, 00:19:48.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.575 "is_configured": false, 00:19:48.575 "data_offset": 0, 00:19:48.575 "data_size": 63488 00:19:48.575 }, 00:19:48.575 { 00:19:48.575 "name": "BaseBdev2", 00:19:48.575 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:48.575 "is_configured": true, 00:19:48.575 "data_offset": 2048, 00:19:48.575 "data_size": 63488 00:19:48.575 } 00:19:48.575 ] 00:19:48.575 }' 00:19:48.575 09:12:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.575 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.142 "name": "raid_bdev1", 00:19:49.142 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:49.142 "strip_size_kb": 0, 00:19:49.142 "state": "online", 00:19:49.142 "raid_level": "raid1", 00:19:49.142 "superblock": true, 00:19:49.142 "num_base_bdevs": 2, 00:19:49.142 "num_base_bdevs_discovered": 1, 00:19:49.142 "num_base_bdevs_operational": 1, 00:19:49.142 "base_bdevs_list": [ 00:19:49.142 { 00:19:49.142 "name": null, 00:19:49.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.142 "is_configured": false, 00:19:49.142 "data_offset": 0, 00:19:49.142 "data_size": 63488 00:19:49.142 }, 00:19:49.142 
{ 00:19:49.142 "name": "BaseBdev2", 00:19:49.142 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:49.142 "is_configured": true, 00:19:49.142 "data_offset": 2048, 00:19:49.142 "data_size": 63488 00:19:49.142 } 00:19:49.142 ] 00:19:49.142 }' 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.142 09:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.142 [2024-11-06 09:12:48.055447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.142 [2024-11-06 09:12:48.073120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.142 09:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:49.142 [2024-11-06 09:12:48.075599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.076 09:12:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.076 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.334 "name": "raid_bdev1", 00:19:50.334 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:50.334 "strip_size_kb": 0, 00:19:50.334 "state": "online", 00:19:50.334 "raid_level": "raid1", 00:19:50.334 "superblock": true, 00:19:50.334 "num_base_bdevs": 2, 00:19:50.334 "num_base_bdevs_discovered": 2, 00:19:50.334 "num_base_bdevs_operational": 2, 00:19:50.334 "process": { 00:19:50.334 "type": "rebuild", 00:19:50.334 "target": "spare", 00:19:50.334 "progress": { 00:19:50.334 "blocks": 20480, 00:19:50.334 "percent": 32 00:19:50.334 } 00:19:50.334 }, 00:19:50.334 "base_bdevs_list": [ 00:19:50.334 { 00:19:50.334 "name": "spare", 00:19:50.334 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:50.334 "is_configured": true, 00:19:50.334 "data_offset": 2048, 00:19:50.334 "data_size": 63488 00:19:50.334 }, 00:19:50.334 { 00:19:50.334 "name": "BaseBdev2", 00:19:50.334 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:50.334 "is_configured": true, 00:19:50.334 "data_offset": 2048, 00:19:50.334 "data_size": 63488 00:19:50.334 } 00:19:50.334 ] 00:19:50.334 }' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:50.334 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.334 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.334 "name": "raid_bdev1", 00:19:50.334 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:50.334 "strip_size_kb": 0, 00:19:50.335 "state": "online", 00:19:50.335 "raid_level": "raid1", 00:19:50.335 "superblock": true, 00:19:50.335 "num_base_bdevs": 2, 00:19:50.335 "num_base_bdevs_discovered": 2, 00:19:50.335 "num_base_bdevs_operational": 2, 00:19:50.335 "process": { 00:19:50.335 "type": "rebuild", 00:19:50.335 "target": "spare", 00:19:50.335 "progress": { 00:19:50.335 "blocks": 22528, 00:19:50.335 "percent": 35 00:19:50.335 } 00:19:50.335 }, 00:19:50.335 "base_bdevs_list": [ 00:19:50.335 { 00:19:50.335 "name": "spare", 00:19:50.335 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:50.335 "is_configured": true, 00:19:50.335 "data_offset": 2048, 00:19:50.335 "data_size": 63488 00:19:50.335 }, 00:19:50.335 { 00:19:50.335 "name": "BaseBdev2", 00:19:50.335 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:50.335 "is_configured": true, 00:19:50.335 "data_offset": 2048, 00:19:50.335 "data_size": 63488 00:19:50.335 } 00:19:50.335 ] 00:19:50.335 }' 00:19:50.335 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.335 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.335 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.335 09:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.335 09:12:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.709 09:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.710 "name": "raid_bdev1", 00:19:51.710 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:51.710 "strip_size_kb": 0, 00:19:51.710 "state": "online", 00:19:51.710 "raid_level": "raid1", 00:19:51.710 "superblock": true, 00:19:51.710 "num_base_bdevs": 2, 00:19:51.710 "num_base_bdevs_discovered": 2, 00:19:51.710 "num_base_bdevs_operational": 2, 00:19:51.710 "process": { 00:19:51.710 "type": "rebuild", 00:19:51.710 "target": "spare", 00:19:51.710 "progress": { 00:19:51.710 "blocks": 45056, 00:19:51.710 "percent": 70 00:19:51.710 } 00:19:51.710 }, 00:19:51.710 "base_bdevs_list": [ 00:19:51.710 { 
00:19:51.710 "name": "spare", 00:19:51.710 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:51.710 "is_configured": true, 00:19:51.710 "data_offset": 2048, 00:19:51.710 "data_size": 63488 00:19:51.710 }, 00:19:51.710 { 00:19:51.710 "name": "BaseBdev2", 00:19:51.710 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:51.710 "is_configured": true, 00:19:51.710 "data_offset": 2048, 00:19:51.710 "data_size": 63488 00:19:51.710 } 00:19:51.710 ] 00:19:51.710 }' 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.710 09:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:52.276 [2024-11-06 09:12:51.191343] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:52.276 [2024-11-06 09:12:51.191439] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:52.276 [2024-11-06 09:12:51.191596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.534 09:12:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.534 "name": "raid_bdev1", 00:19:52.534 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:52.534 "strip_size_kb": 0, 00:19:52.534 "state": "online", 00:19:52.534 "raid_level": "raid1", 00:19:52.534 "superblock": true, 00:19:52.534 "num_base_bdevs": 2, 00:19:52.534 "num_base_bdevs_discovered": 2, 00:19:52.534 "num_base_bdevs_operational": 2, 00:19:52.534 "base_bdevs_list": [ 00:19:52.534 { 00:19:52.534 "name": "spare", 00:19:52.534 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:52.534 "is_configured": true, 00:19:52.534 "data_offset": 2048, 00:19:52.534 "data_size": 63488 00:19:52.534 }, 00:19:52.534 { 00:19:52.534 "name": "BaseBdev2", 00:19:52.534 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:52.534 "is_configured": true, 00:19:52.534 "data_offset": 2048, 00:19:52.534 "data_size": 63488 00:19:52.534 } 00:19:52.534 ] 00:19:52.534 }' 00:19:52.534 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.791 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:52.791 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.792 "name": "raid_bdev1", 00:19:52.792 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:52.792 "strip_size_kb": 0, 00:19:52.792 "state": "online", 00:19:52.792 "raid_level": "raid1", 00:19:52.792 "superblock": true, 00:19:52.792 "num_base_bdevs": 2, 00:19:52.792 "num_base_bdevs_discovered": 2, 00:19:52.792 "num_base_bdevs_operational": 2, 00:19:52.792 "base_bdevs_list": [ 00:19:52.792 { 00:19:52.792 "name": "spare", 00:19:52.792 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:52.792 "is_configured": true, 00:19:52.792 "data_offset": 2048, 00:19:52.792 "data_size": 63488 00:19:52.792 }, 00:19:52.792 { 00:19:52.792 "name": 
"BaseBdev2", 00:19:52.792 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:52.792 "is_configured": true, 00:19:52.792 "data_offset": 2048, 00:19:52.792 "data_size": 63488 00:19:52.792 } 00:19:52.792 ] 00:19:52.792 }' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.792 "name": "raid_bdev1", 00:19:52.792 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:52.792 "strip_size_kb": 0, 00:19:52.792 "state": "online", 00:19:52.792 "raid_level": "raid1", 00:19:52.792 "superblock": true, 00:19:52.792 "num_base_bdevs": 2, 00:19:52.792 "num_base_bdevs_discovered": 2, 00:19:52.792 "num_base_bdevs_operational": 2, 00:19:52.792 "base_bdevs_list": [ 00:19:52.792 { 00:19:52.792 "name": "spare", 00:19:52.792 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:52.792 "is_configured": true, 00:19:52.792 "data_offset": 2048, 00:19:52.792 "data_size": 63488 00:19:52.792 }, 00:19:52.792 { 00:19:52.792 "name": "BaseBdev2", 00:19:52.792 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:52.792 "is_configured": true, 00:19:52.792 "data_offset": 2048, 00:19:52.792 "data_size": 63488 00:19:52.792 } 00:19:52.792 ] 00:19:52.792 }' 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.792 09:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.358 [2024-11-06 09:12:52.242445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.358 [2024-11-06 09:12:52.242610] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.358 [2024-11-06 09:12:52.242796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.358 [2024-11-06 09:12:52.242954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.358 [2024-11-06 09:12:52.243047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:53.358 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:53.617 /dev/nbd0 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.617 1+0 records in 00:19:53.617 1+0 records out 00:19:53.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000287252 s, 14.3 MB/s 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:53.617 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:53.875 /dev/nbd1 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:53.875 09:12:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.875 1+0 records in 00:19:53.875 1+0 records out 00:19:53.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416856 s, 9.8 MB/s 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:53.875 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:54.159 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:54.159 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.159 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:54.159 09:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.159 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:54.159 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.159 
09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.424 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 [2024-11-06 09:12:53.545219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:54.683 [2024-11-06 09:12:53.545419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.683 [2024-11-06 09:12:53.545487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:54.683 [2024-11-06 09:12:53.545503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.683 [2024-11-06 09:12:53.548114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.683 [2024-11-06 09:12:53.548157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.683 [2024-11-06 09:12:53.548257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:54.683 [2024-11-06 09:12:53.548321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.683 [2024-11-06 09:12:53.548477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:19:54.683 spare 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 [2024-11-06 09:12:53.648414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:54.683 [2024-11-06 09:12:53.648624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:54.683 [2024-11-06 09:12:53.649023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:19:54.683 [2024-11-06 09:12:53.649343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:54.683 [2024-11-06 09:12:53.649447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:54.683 [2024-11-06 09:12:53.649759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.683 "name": "raid_bdev1", 00:19:54.683 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:54.683 "strip_size_kb": 0, 00:19:54.683 "state": "online", 00:19:54.683 "raid_level": "raid1", 00:19:54.683 "superblock": true, 00:19:54.683 "num_base_bdevs": 2, 00:19:54.683 "num_base_bdevs_discovered": 2, 00:19:54.683 "num_base_bdevs_operational": 2, 00:19:54.683 "base_bdevs_list": [ 00:19:54.683 { 00:19:54.683 "name": "spare", 00:19:54.683 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:54.683 "is_configured": true, 00:19:54.683 "data_offset": 2048, 00:19:54.683 "data_size": 63488 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "name": "BaseBdev2", 00:19:54.683 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:54.683 "is_configured": true, 00:19:54.683 "data_offset": 2048, 00:19:54.683 "data_size": 63488 00:19:54.683 } 00:19:54.683 ] 00:19:54.683 }' 00:19:54.683 09:12:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.683 09:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.250 "name": "raid_bdev1", 00:19:55.250 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:55.250 "strip_size_kb": 0, 00:19:55.250 "state": "online", 00:19:55.250 "raid_level": "raid1", 00:19:55.250 "superblock": true, 00:19:55.250 "num_base_bdevs": 2, 00:19:55.250 "num_base_bdevs_discovered": 2, 00:19:55.250 "num_base_bdevs_operational": 2, 00:19:55.250 "base_bdevs_list": [ 00:19:55.250 { 00:19:55.250 "name": "spare", 00:19:55.250 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:55.250 "is_configured": true, 00:19:55.250 "data_offset": 2048, 00:19:55.250 "data_size": 63488 00:19:55.250 }, 
00:19:55.250 { 00:19:55.250 "name": "BaseBdev2", 00:19:55.250 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:55.250 "is_configured": true, 00:19:55.250 "data_offset": 2048, 00:19:55.250 "data_size": 63488 00:19:55.250 } 00:19:55.250 ] 00:19:55.250 }' 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.250 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.507 [2024-11-06 09:12:54.313339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.507 "name": "raid_bdev1", 00:19:55.507 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:55.507 "strip_size_kb": 0, 00:19:55.507 "state": "online", 00:19:55.507 "raid_level": "raid1", 00:19:55.507 "superblock": true, 00:19:55.507 "num_base_bdevs": 2, 00:19:55.507 "num_base_bdevs_discovered": 1, 00:19:55.507 "num_base_bdevs_operational": 
1, 00:19:55.507 "base_bdevs_list": [ 00:19:55.507 { 00:19:55.507 "name": null, 00:19:55.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.507 "is_configured": false, 00:19:55.507 "data_offset": 0, 00:19:55.507 "data_size": 63488 00:19:55.507 }, 00:19:55.507 { 00:19:55.507 "name": "BaseBdev2", 00:19:55.507 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:55.507 "is_configured": true, 00:19:55.507 "data_offset": 2048, 00:19:55.507 "data_size": 63488 00:19:55.507 } 00:19:55.507 ] 00:19:55.507 }' 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.507 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.765 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.765 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.765 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.765 [2024-11-06 09:12:54.784696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.765 [2024-11-06 09:12:54.785041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.765 [2024-11-06 09:12:54.785209] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:55.765 [2024-11-06 09:12:54.785353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.765 [2024-11-06 09:12:54.803226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:19:56.024 09:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.024 09:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:56.024 [2024-11-06 09:12:54.805550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.987 "name": "raid_bdev1", 00:19:56.987 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:56.987 "strip_size_kb": 0, 00:19:56.987 "state": "online", 00:19:56.987 "raid_level": "raid1", 
00:19:56.987 "superblock": true, 00:19:56.987 "num_base_bdevs": 2, 00:19:56.987 "num_base_bdevs_discovered": 2, 00:19:56.987 "num_base_bdevs_operational": 2, 00:19:56.987 "process": { 00:19:56.987 "type": "rebuild", 00:19:56.987 "target": "spare", 00:19:56.987 "progress": { 00:19:56.987 "blocks": 20480, 00:19:56.987 "percent": 32 00:19:56.987 } 00:19:56.987 }, 00:19:56.987 "base_bdevs_list": [ 00:19:56.987 { 00:19:56.987 "name": "spare", 00:19:56.987 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:56.987 "is_configured": true, 00:19:56.987 "data_offset": 2048, 00:19:56.987 "data_size": 63488 00:19:56.987 }, 00:19:56.987 { 00:19:56.987 "name": "BaseBdev2", 00:19:56.987 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:56.987 "is_configured": true, 00:19:56.987 "data_offset": 2048, 00:19:56.987 "data_size": 63488 00:19:56.987 } 00:19:56.987 ] 00:19:56.987 }' 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.987 09:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.987 [2024-11-06 09:12:55.950232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.987 [2024-11-06 09:12:56.011552] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.987 [2024-11-06 09:12:56.011636] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:56.987 [2024-11-06 09:12:56.011654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.987 [2024-11-06 09:12:56.011668] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.247 "name": "raid_bdev1", 00:19:57.247 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:57.247 "strip_size_kb": 0, 00:19:57.247 "state": "online", 00:19:57.247 "raid_level": "raid1", 00:19:57.247 "superblock": true, 00:19:57.247 "num_base_bdevs": 2, 00:19:57.247 "num_base_bdevs_discovered": 1, 00:19:57.247 "num_base_bdevs_operational": 1, 00:19:57.247 "base_bdevs_list": [ 00:19:57.247 { 00:19:57.247 "name": null, 00:19:57.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.247 "is_configured": false, 00:19:57.247 "data_offset": 0, 00:19:57.247 "data_size": 63488 00:19:57.247 }, 00:19:57.247 { 00:19:57.247 "name": "BaseBdev2", 00:19:57.247 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:57.247 "is_configured": true, 00:19:57.247 "data_offset": 2048, 00:19:57.247 "data_size": 63488 00:19:57.247 } 00:19:57.247 ] 00:19:57.247 }' 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.247 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.512 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.512 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.512 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.512 [2024-11-06 09:12:56.409550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.512 [2024-11-06 09:12:56.409628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.512 [2024-11-06 09:12:56.409660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:57.512 [2024-11-06 09:12:56.409679] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.513 [2024-11-06 09:12:56.410198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.513 [2024-11-06 09:12:56.410244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.513 [2024-11-06 09:12:56.410389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:57.513 [2024-11-06 09:12:56.410410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.513 [2024-11-06 09:12:56.410424] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:57.513 [2024-11-06 09:12:56.410458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.513 [2024-11-06 09:12:56.427936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:57.513 spare 00:19:57.513 09:12:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.513 09:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:57.513 [2024-11-06 09:12:56.430258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.449 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.707 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.708 "name": "raid_bdev1", 00:19:58.708 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:58.708 "strip_size_kb": 0, 00:19:58.708 "state": "online", 00:19:58.708 "raid_level": "raid1", 00:19:58.708 "superblock": true, 00:19:58.708 "num_base_bdevs": 2, 00:19:58.708 "num_base_bdevs_discovered": 2, 00:19:58.708 "num_base_bdevs_operational": 2, 00:19:58.708 "process": { 00:19:58.708 "type": "rebuild", 00:19:58.708 "target": "spare", 00:19:58.708 "progress": { 00:19:58.708 "blocks": 20480, 00:19:58.708 "percent": 32 00:19:58.708 } 00:19:58.708 }, 00:19:58.708 "base_bdevs_list": [ 00:19:58.708 { 00:19:58.708 "name": "spare", 00:19:58.708 "uuid": "ddc8e9ef-c025-53d1-87f1-f3d6914fa117", 00:19:58.708 "is_configured": true, 00:19:58.708 "data_offset": 2048, 00:19:58.708 "data_size": 63488 00:19:58.708 }, 00:19:58.708 { 00:19:58.708 "name": "BaseBdev2", 00:19:58.708 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:58.708 "is_configured": true, 00:19:58.708 "data_offset": 2048, 00:19:58.708 "data_size": 63488 00:19:58.708 } 00:19:58.708 ] 00:19:58.708 }' 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.708 
09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.708 [2024-11-06 09:12:57.602510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.708 [2024-11-06 09:12:57.636096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:58.708 [2024-11-06 09:12:57.636321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.708 [2024-11-06 09:12:57.636351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.708 [2024-11-06 09:12:57.636363] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.708 "name": "raid_bdev1", 00:19:58.708 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:58.708 "strip_size_kb": 0, 00:19:58.708 "state": "online", 00:19:58.708 "raid_level": "raid1", 00:19:58.708 "superblock": true, 00:19:58.708 "num_base_bdevs": 2, 00:19:58.708 "num_base_bdevs_discovered": 1, 00:19:58.708 "num_base_bdevs_operational": 1, 00:19:58.708 "base_bdevs_list": [ 00:19:58.708 { 00:19:58.708 "name": null, 00:19:58.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.708 "is_configured": false, 00:19:58.708 "data_offset": 0, 00:19:58.708 "data_size": 63488 00:19:58.708 }, 00:19:58.708 { 00:19:58.708 "name": "BaseBdev2", 00:19:58.708 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:58.708 "is_configured": true, 00:19:58.708 "data_offset": 2048, 00:19:58.708 "data_size": 63488 00:19:58.708 } 00:19:58.708 ] 00:19:58.708 }' 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.708 09:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.289 09:12:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.289 "name": "raid_bdev1", 00:19:59.289 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:19:59.289 "strip_size_kb": 0, 00:19:59.289 "state": "online", 00:19:59.289 "raid_level": "raid1", 00:19:59.289 "superblock": true, 00:19:59.289 "num_base_bdevs": 2, 00:19:59.289 "num_base_bdevs_discovered": 1, 00:19:59.289 "num_base_bdevs_operational": 1, 00:19:59.289 "base_bdevs_list": [ 00:19:59.289 { 00:19:59.289 "name": null, 00:19:59.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.289 "is_configured": false, 00:19:59.289 "data_offset": 0, 00:19:59.289 "data_size": 63488 00:19:59.289 }, 00:19:59.289 { 00:19:59.289 "name": "BaseBdev2", 00:19:59.289 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:19:59.289 "is_configured": true, 00:19:59.289 "data_offset": 2048, 00:19:59.289 "data_size": 
63488 00:19:59.289 } 00:19:59.289 ] 00:19:59.289 }' 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.289 [2024-11-06 09:12:58.287351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:59.289 [2024-11-06 09:12:58.287542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.289 [2024-11-06 09:12:58.287607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:59.289 [2024-11-06 09:12:58.287792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.289 [2024-11-06 09:12:58.288300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.289 [2024-11-06 09:12:58.288330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:19:59.289 [2024-11-06 09:12:58.288442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:59.289 [2024-11-06 09:12:58.288459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.289 [2024-11-06 09:12:58.288474] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:59.289 [2024-11-06 09:12:58.288487] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:59.289 BaseBdev1 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.289 09:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.700 "name": "raid_bdev1", 00:20:00.700 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:20:00.700 "strip_size_kb": 0, 00:20:00.700 "state": "online", 00:20:00.700 "raid_level": "raid1", 00:20:00.700 "superblock": true, 00:20:00.700 "num_base_bdevs": 2, 00:20:00.700 "num_base_bdevs_discovered": 1, 00:20:00.700 "num_base_bdevs_operational": 1, 00:20:00.700 "base_bdevs_list": [ 00:20:00.700 { 00:20:00.700 "name": null, 00:20:00.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.700 "is_configured": false, 00:20:00.700 "data_offset": 0, 00:20:00.700 "data_size": 63488 00:20:00.700 }, 00:20:00.700 { 00:20:00.700 "name": "BaseBdev2", 00:20:00.700 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:20:00.700 "is_configured": true, 00:20:00.700 "data_offset": 2048, 00:20:00.700 "data_size": 63488 00:20:00.700 } 00:20:00.700 ] 00:20:00.700 }' 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.700 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.980 "name": "raid_bdev1", 00:20:00.980 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:20:00.980 "strip_size_kb": 0, 00:20:00.980 "state": "online", 00:20:00.980 "raid_level": "raid1", 00:20:00.980 "superblock": true, 00:20:00.980 "num_base_bdevs": 2, 00:20:00.980 "num_base_bdevs_discovered": 1, 00:20:00.980 "num_base_bdevs_operational": 1, 00:20:00.980 "base_bdevs_list": [ 00:20:00.980 { 00:20:00.980 "name": null, 00:20:00.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.980 "is_configured": false, 00:20:00.980 "data_offset": 0, 00:20:00.980 "data_size": 63488 00:20:00.980 }, 00:20:00.980 { 00:20:00.980 "name": "BaseBdev2", 00:20:00.980 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:20:00.980 "is_configured": true, 00:20:00.980 "data_offset": 2048, 00:20:00.980 "data_size": 63488 00:20:00.980 } 00:20:00.980 ] 00:20:00.980 }' 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.980 09:12:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.980 [2024-11-06 09:12:59.893224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.980 [2024-11-06 09:12:59.893553] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.980 [2024-11-06 09:12:59.893685] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:00.980 request: 00:20:00.980 { 00:20:00.980 "base_bdev": "BaseBdev1", 00:20:00.980 "raid_bdev": "raid_bdev1", 00:20:00.980 "method": 
"bdev_raid_add_base_bdev", 00:20:00.980 "req_id": 1 00:20:00.980 } 00:20:00.980 Got JSON-RPC error response 00:20:00.980 response: 00:20:00.980 { 00:20:00.980 "code": -22, 00:20:00.980 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:00.980 } 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:00.980 09:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.911 09:13:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.911 09:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.169 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.169 "name": "raid_bdev1", 00:20:02.169 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:20:02.169 "strip_size_kb": 0, 00:20:02.169 "state": "online", 00:20:02.169 "raid_level": "raid1", 00:20:02.169 "superblock": true, 00:20:02.169 "num_base_bdevs": 2, 00:20:02.169 "num_base_bdevs_discovered": 1, 00:20:02.169 "num_base_bdevs_operational": 1, 00:20:02.169 "base_bdevs_list": [ 00:20:02.169 { 00:20:02.169 "name": null, 00:20:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.169 "is_configured": false, 00:20:02.169 "data_offset": 0, 00:20:02.169 "data_size": 63488 00:20:02.169 }, 00:20:02.169 { 00:20:02.169 "name": "BaseBdev2", 00:20:02.169 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:20:02.169 "is_configured": true, 00:20:02.169 "data_offset": 2048, 00:20:02.169 "data_size": 63488 00:20:02.169 } 00:20:02.169 ] 00:20:02.169 }' 00:20:02.169 09:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.169 09:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.428 "name": "raid_bdev1", 00:20:02.428 "uuid": "6fc996fd-adea-4535-8023-079261850c34", 00:20:02.428 "strip_size_kb": 0, 00:20:02.428 "state": "online", 00:20:02.428 "raid_level": "raid1", 00:20:02.428 "superblock": true, 00:20:02.428 "num_base_bdevs": 2, 00:20:02.428 "num_base_bdevs_discovered": 1, 00:20:02.428 "num_base_bdevs_operational": 1, 00:20:02.428 "base_bdevs_list": [ 00:20:02.428 { 00:20:02.428 "name": null, 00:20:02.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.428 "is_configured": false, 00:20:02.428 "data_offset": 0, 00:20:02.428 "data_size": 63488 00:20:02.428 }, 00:20:02.428 { 00:20:02.428 "name": "BaseBdev2", 00:20:02.428 "uuid": "1b271bf5-de4e-5f8d-abd6-17318ddaf068", 00:20:02.428 "is_configured": true, 00:20:02.428 "data_offset": 2048, 00:20:02.428 "data_size": 63488 00:20:02.428 } 00:20:02.428 ] 00:20:02.428 }' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75466 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75466 ']' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75466 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.428 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75466 00:20:02.688 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:02.688 killing process with pid 75466 00:20:02.688 Received shutdown signal, test time was about 60.000000 seconds 00:20:02.688 00:20:02.688 Latency(us) 00:20:02.688 [2024-11-06T09:13:01.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.688 [2024-11-06T09:13:01.728Z] =================================================================================================================== 00:20:02.688 [2024-11-06T09:13:01.728Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.688 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:02.688 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75466' 00:20:02.688 09:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75466 00:20:02.688 [2024-11-06 09:13:01.488163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:02.688 09:13:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75466 00:20:02.688 [2024-11-06 09:13:01.488341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.688 [2024-11-06 09:13:01.488410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.688 [2024-11-06 09:13:01.488425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:02.947 [2024-11-06 09:13:01.812870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.324 09:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:04.324 00:20:04.324 real 0m23.753s 00:20:04.324 user 0m28.960s 00:20:04.324 sys 0m4.093s 00:20:04.324 09:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:04.324 09:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 ************************************ 00:20:04.324 END TEST raid_rebuild_test_sb 00:20:04.324 ************************************ 00:20:04.324 09:13:03 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:20:04.324 09:13:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:04.324 09:13:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:04.324 09:13:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 ************************************ 00:20:04.324 START TEST raid_rebuild_test_io 00:20:04.324 ************************************ 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:04.324 
09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76204 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76204 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76204 ']' 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.324 09:13:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.324 [2024-11-06 09:13:03.152539] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:20:04.324 [2024-11-06 09:13:03.152855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:04.324 Zero copy mechanism will not be used. 
00:20:04.324 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76204 ] 00:20:04.324 [2024-11-06 09:13:03.334900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.595 [2024-11-06 09:13:03.458921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.857 [2024-11-06 09:13:03.677226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.857 [2024-11-06 09:13:03.677490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.115 BaseBdev1_malloc 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.115 [2024-11-06 09:13:04.065868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:05.115 [2024-11-06 09:13:04.066110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:05.115 [2024-11-06 09:13:04.066178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:05.115 [2024-11-06 09:13:04.066287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.115 [2024-11-06 09:13:04.068951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.115 [2024-11-06 09:13:04.069125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:05.115 BaseBdev1 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.115 BaseBdev2_malloc 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.115 [2024-11-06 09:13:04.124247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:05.115 [2024-11-06 09:13:04.124492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.115 [2024-11-06 09:13:04.124557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:05.115 [2024-11-06 09:13:04.124662] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.115 [2024-11-06 09:13:04.127217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.115 [2024-11-06 09:13:04.127375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:05.115 BaseBdev2 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.115 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.375 spare_malloc 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.375 spare_delay 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.375 [2024-11-06 09:13:04.211194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:05.375 [2024-11-06 09:13:04.211381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:05.375 [2024-11-06 09:13:04.211411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:05.375 [2024-11-06 09:13:04.211426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.375 [2024-11-06 09:13:04.213802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.375 [2024-11-06 09:13:04.213845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:05.375 spare 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.375 [2024-11-06 09:13:04.223215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.375 [2024-11-06 09:13:04.225712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.375 [2024-11-06 09:13:04.225954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:05.375 [2024-11-06 09:13:04.225996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:05.375 [2024-11-06 09:13:04.226341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:05.375 [2024-11-06 09:13:04.226548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:05.375 [2024-11-06 09:13:04.226564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:05.375 [2024-11-06 09:13:04.226739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.375 "name": "raid_bdev1", 00:20:05.375 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:05.375 
"strip_size_kb": 0, 00:20:05.375 "state": "online", 00:20:05.375 "raid_level": "raid1", 00:20:05.375 "superblock": false, 00:20:05.375 "num_base_bdevs": 2, 00:20:05.375 "num_base_bdevs_discovered": 2, 00:20:05.375 "num_base_bdevs_operational": 2, 00:20:05.375 "base_bdevs_list": [ 00:20:05.375 { 00:20:05.375 "name": "BaseBdev1", 00:20:05.375 "uuid": "08fef669-c20c-521e-abcb-3e40efe49d67", 00:20:05.375 "is_configured": true, 00:20:05.375 "data_offset": 0, 00:20:05.375 "data_size": 65536 00:20:05.375 }, 00:20:05.375 { 00:20:05.375 "name": "BaseBdev2", 00:20:05.375 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:05.375 "is_configured": true, 00:20:05.375 "data_offset": 0, 00:20:05.375 "data_size": 65536 00:20:05.375 } 00:20:05.375 ] 00:20:05.375 }' 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.375 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.635 [2024-11-06 09:13:04.634950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.635 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.910 [2024-11-06 09:13:04.726498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.910 09:13:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.910 "name": "raid_bdev1", 00:20:05.910 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:05.910 "strip_size_kb": 0, 00:20:05.910 "state": "online", 00:20:05.910 "raid_level": "raid1", 00:20:05.910 "superblock": false, 00:20:05.910 "num_base_bdevs": 2, 00:20:05.910 "num_base_bdevs_discovered": 1, 00:20:05.910 "num_base_bdevs_operational": 1, 00:20:05.910 "base_bdevs_list": [ 00:20:05.910 { 00:20:05.910 "name": null, 00:20:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.910 "is_configured": false, 00:20:05.910 "data_offset": 0, 00:20:05.910 "data_size": 65536 00:20:05.910 }, 00:20:05.910 { 00:20:05.910 "name": "BaseBdev2", 00:20:05.910 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:05.910 "is_configured": true, 00:20:05.910 "data_offset": 0, 00:20:05.910 "data_size": 65536 00:20:05.910 } 00:20:05.910 ] 00:20:05.910 }' 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.910 09:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:20:05.910 [2024-11-06 09:13:04.835019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:05.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:05.910 Zero copy mechanism will not be used. 00:20:05.910 Running I/O for 60 seconds... 00:20:06.188 09:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.188 09:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.188 09:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.188 [2024-11-06 09:13:05.209905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.446 09:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.446 09:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:06.446 [2024-11-06 09:13:05.266117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:06.446 [2024-11-06 09:13:05.268555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.446 [2024-11-06 09:13:05.382523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:06.446 [2024-11-06 09:13:05.383111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:06.704 [2024-11-06 09:13:05.586633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:06.704 [2024-11-06 09:13:05.586971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:06.962 168.00 IOPS, 504.00 MiB/s [2024-11-06T09:13:06.002Z] [2024-11-06 09:13:05.841375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:06.962 [2024-11-06 09:13:05.841848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:07.220 [2024-11-06 09:13:06.071088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:07.220 [2024-11-06 09:13:06.071467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.220 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.479 "name": "raid_bdev1", 00:20:07.479 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:07.479 "strip_size_kb": 0, 00:20:07.479 "state": "online", 00:20:07.479 "raid_level": "raid1", 00:20:07.479 "superblock": false, 
00:20:07.479 "num_base_bdevs": 2, 00:20:07.479 "num_base_bdevs_discovered": 2, 00:20:07.479 "num_base_bdevs_operational": 2, 00:20:07.479 "process": { 00:20:07.479 "type": "rebuild", 00:20:07.479 "target": "spare", 00:20:07.479 "progress": { 00:20:07.479 "blocks": 10240, 00:20:07.479 "percent": 15 00:20:07.479 } 00:20:07.479 }, 00:20:07.479 "base_bdevs_list": [ 00:20:07.479 { 00:20:07.479 "name": "spare", 00:20:07.479 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:07.479 "is_configured": true, 00:20:07.479 "data_offset": 0, 00:20:07.479 "data_size": 65536 00:20:07.479 }, 00:20:07.479 { 00:20:07.479 "name": "BaseBdev2", 00:20:07.479 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:07.479 "is_configured": true, 00:20:07.479 "data_offset": 0, 00:20:07.479 "data_size": 65536 00:20:07.479 } 00:20:07.479 ] 00:20:07.479 }' 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 [2024-11-06 09:13:06.400691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.479 [2024-11-06 09:13:06.515851] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:07.737 [2024-11-06 09:13:06.525164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:07.737 [2024-11-06 09:13:06.525421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.737 [2024-11-06 09:13:06.525456] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:07.737 [2024-11-06 09:13:06.570214] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.737 "name": "raid_bdev1", 00:20:07.737 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:07.737 "strip_size_kb": 0, 00:20:07.737 "state": "online", 00:20:07.737 "raid_level": "raid1", 00:20:07.737 "superblock": false, 00:20:07.737 "num_base_bdevs": 2, 00:20:07.737 "num_base_bdevs_discovered": 1, 00:20:07.737 "num_base_bdevs_operational": 1, 00:20:07.737 "base_bdevs_list": [ 00:20:07.737 { 00:20:07.737 "name": null, 00:20:07.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.737 "is_configured": false, 00:20:07.737 "data_offset": 0, 00:20:07.737 "data_size": 65536 00:20:07.737 }, 00:20:07.737 { 00:20:07.737 "name": "BaseBdev2", 00:20:07.737 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:07.737 "is_configured": true, 00:20:07.737 "data_offset": 0, 00:20:07.737 "data_size": 65536 00:20:07.737 } 00:20:07.737 ] 00:20:07.737 }' 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.737 09:13:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.995 139.50 IOPS, 418.50 MiB/s [2024-11-06T09:13:07.035Z] 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.995 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.995 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.995 09:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.995 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.995 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:07.995 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.995 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.995 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.253 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.253 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.253 "name": "raid_bdev1", 00:20:08.253 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:08.253 "strip_size_kb": 0, 00:20:08.253 "state": "online", 00:20:08.253 "raid_level": "raid1", 00:20:08.253 "superblock": false, 00:20:08.253 "num_base_bdevs": 2, 00:20:08.253 "num_base_bdevs_discovered": 1, 00:20:08.253 "num_base_bdevs_operational": 1, 00:20:08.253 "base_bdevs_list": [ 00:20:08.253 { 00:20:08.253 "name": null, 00:20:08.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.253 "is_configured": false, 00:20:08.253 "data_offset": 0, 00:20:08.253 "data_size": 65536 00:20:08.254 }, 00:20:08.254 { 00:20:08.254 "name": "BaseBdev2", 00:20:08.254 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:08.254 "is_configured": true, 00:20:08.254 "data_offset": 0, 00:20:08.254 "data_size": 65536 00:20:08.254 } 00:20:08.254 ] 00:20:08.254 }' 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.254 [2024-11-06 09:13:07.163260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.254 09:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:08.254 [2024-11-06 09:13:07.216149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:08.254 [2024-11-06 09:13:07.218782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.515 [2024-11-06 09:13:07.328377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.515 [2024-11-06 09:13:07.328982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.515 [2024-11-06 09:13:07.532103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.515 [2024-11-06 09:13:07.533033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:09.082 154.67 IOPS, 464.00 MiB/s [2024-11-06T09:13:08.122Z] [2024-11-06 09:13:07.911123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.340 "name": "raid_bdev1", 00:20:09.340 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:09.340 "strip_size_kb": 0, 00:20:09.340 "state": "online", 00:20:09.340 "raid_level": "raid1", 00:20:09.340 "superblock": false, 00:20:09.340 "num_base_bdevs": 2, 00:20:09.340 "num_base_bdevs_discovered": 2, 00:20:09.340 "num_base_bdevs_operational": 2, 00:20:09.340 "process": { 00:20:09.340 "type": "rebuild", 00:20:09.340 "target": "spare", 00:20:09.340 "progress": { 00:20:09.340 "blocks": 12288, 00:20:09.340 "percent": 18 00:20:09.340 } 00:20:09.340 }, 00:20:09.340 "base_bdevs_list": [ 00:20:09.340 { 00:20:09.340 "name": "spare", 00:20:09.340 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:09.340 "is_configured": true, 00:20:09.340 "data_offset": 0, 00:20:09.340 "data_size": 65536 00:20:09.340 }, 00:20:09.340 { 00:20:09.340 "name": "BaseBdev2", 00:20:09.340 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:09.340 "is_configured": true, 00:20:09.340 "data_offset": 0, 00:20:09.340 "data_size": 65536 00:20:09.340 } 00:20:09.340 ] 00:20:09.340 }' 00:20:09.340 09:13:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.340 [2024-11-06 09:13:08.275896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.340 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.341 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.599 "name": "raid_bdev1", 00:20:09.599 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:09.599 "strip_size_kb": 0, 00:20:09.599 "state": "online", 00:20:09.599 "raid_level": "raid1", 00:20:09.599 "superblock": false, 00:20:09.599 "num_base_bdevs": 2, 00:20:09.599 "num_base_bdevs_discovered": 2, 00:20:09.599 "num_base_bdevs_operational": 2, 00:20:09.599 "process": { 00:20:09.599 "type": "rebuild", 00:20:09.599 "target": "spare", 00:20:09.599 "progress": { 00:20:09.599 "blocks": 14336, 00:20:09.599 "percent": 21 00:20:09.599 } 00:20:09.599 }, 00:20:09.599 "base_bdevs_list": [ 00:20:09.599 { 00:20:09.599 "name": "spare", 00:20:09.599 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:09.599 "is_configured": true, 00:20:09.599 "data_offset": 0, 00:20:09.599 "data_size": 65536 00:20:09.599 }, 00:20:09.599 { 00:20:09.599 "name": "BaseBdev2", 00:20:09.599 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:09.599 "is_configured": true, 00:20:09.599 "data_offset": 0, 00:20:09.599 "data_size": 65536 00:20:09.599 } 00:20:09.599 ] 00:20:09.599 }' 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.599 [2024-11-06 09:13:08.425926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.599 09:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.858 [2024-11-06 09:13:08.757919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.858 136.75 IOPS, 410.25 MiB/s [2024-11-06T09:13:08.898Z] [2024-11-06 09:13:08.860814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:09.858 [2024-11-06 09:13:08.861412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:10.425 [2024-11-06 09:13:09.455500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.683 "name": "raid_bdev1", 00:20:10.683 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:10.683 "strip_size_kb": 0, 00:20:10.683 "state": "online", 00:20:10.683 "raid_level": "raid1", 00:20:10.683 "superblock": false, 00:20:10.683 "num_base_bdevs": 2, 00:20:10.683 "num_base_bdevs_discovered": 2, 00:20:10.683 "num_base_bdevs_operational": 2, 00:20:10.683 "process": { 00:20:10.683 "type": "rebuild", 00:20:10.683 "target": "spare", 00:20:10.683 "progress": { 00:20:10.683 "blocks": 32768, 00:20:10.683 "percent": 50 00:20:10.683 } 00:20:10.683 }, 00:20:10.683 "base_bdevs_list": [ 00:20:10.683 { 00:20:10.683 "name": "spare", 00:20:10.683 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:10.683 "is_configured": true, 00:20:10.683 "data_offset": 0, 00:20:10.683 "data_size": 65536 00:20:10.683 }, 00:20:10.683 { 00:20:10.683 "name": "BaseBdev2", 00:20:10.683 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:10.683 "is_configured": true, 00:20:10.683 "data_offset": 0, 00:20:10.683 "data_size": 65536 00:20:10.683 } 00:20:10.683 ] 00:20:10.683 }' 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.683 09:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.199 119.20 IOPS, 357.60 MiB/s [2024-11-06T09:13:10.239Z] [2024-11-06 09:13:10.040075] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:11.457 [2024-11-06 09:13:10.367067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:11.457 [2024-11-06 09:13:10.367691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.717 "name": "raid_bdev1", 00:20:11.717 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:11.717 "strip_size_kb": 0, 00:20:11.717 "state": "online", 00:20:11.717 "raid_level": "raid1", 00:20:11.717 "superblock": 
false, 00:20:11.717 "num_base_bdevs": 2, 00:20:11.717 "num_base_bdevs_discovered": 2, 00:20:11.717 "num_base_bdevs_operational": 2, 00:20:11.717 "process": { 00:20:11.717 "type": "rebuild", 00:20:11.717 "target": "spare", 00:20:11.717 "progress": { 00:20:11.717 "blocks": 49152, 00:20:11.717 "percent": 75 00:20:11.717 } 00:20:11.717 }, 00:20:11.717 "base_bdevs_list": [ 00:20:11.717 { 00:20:11.717 "name": "spare", 00:20:11.717 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:11.717 "is_configured": true, 00:20:11.717 "data_offset": 0, 00:20:11.717 "data_size": 65536 00:20:11.717 }, 00:20:11.717 { 00:20:11.717 "name": "BaseBdev2", 00:20:11.717 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:11.717 "is_configured": true, 00:20:11.717 "data_offset": 0, 00:20:11.717 "data_size": 65536 00:20:11.717 } 00:20:11.717 ] 00:20:11.717 }' 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.717 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.977 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.977 09:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.977 [2024-11-06 09:13:10.794208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:12.236 105.17 IOPS, 315.50 MiB/s [2024-11-06T09:13:11.276Z] [2024-11-06 09:13:11.019355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:12.236 [2024-11-06 09:13:11.242445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:12.800 [2024-11-06 09:13:11.574534] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:12.800 [2024-11-06 09:13:11.680504] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:12.800 [2024-11-06 09:13:11.683654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.800 "name": "raid_bdev1", 00:20:12.800 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:12.800 "strip_size_kb": 0, 00:20:12.800 "state": "online", 00:20:12.800 "raid_level": "raid1", 00:20:12.800 "superblock": false, 00:20:12.800 "num_base_bdevs": 2, 00:20:12.800 "num_base_bdevs_discovered": 2, 00:20:12.800 
"num_base_bdevs_operational": 2, 00:20:12.800 "base_bdevs_list": [ 00:20:12.800 { 00:20:12.800 "name": "spare", 00:20:12.800 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:12.800 "is_configured": true, 00:20:12.800 "data_offset": 0, 00:20:12.800 "data_size": 65536 00:20:12.800 }, 00:20:12.800 { 00:20:12.800 "name": "BaseBdev2", 00:20:12.800 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:12.800 "is_configured": true, 00:20:12.800 "data_offset": 0, 00:20:12.800 "data_size": 65536 00:20:12.800 } 00:20:12.800 ] 00:20:12.800 }' 00:20:12.800 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.059 94.43 IOPS, 283.29 MiB/s [2024-11-06T09:13:12.099Z] 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.059 "name": "raid_bdev1", 00:20:13.059 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:13.059 "strip_size_kb": 0, 00:20:13.059 "state": "online", 00:20:13.059 "raid_level": "raid1", 00:20:13.059 "superblock": false, 00:20:13.059 "num_base_bdevs": 2, 00:20:13.059 "num_base_bdevs_discovered": 2, 00:20:13.059 "num_base_bdevs_operational": 2, 00:20:13.059 "base_bdevs_list": [ 00:20:13.059 { 00:20:13.059 "name": "spare", 00:20:13.059 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:13.059 "is_configured": true, 00:20:13.059 "data_offset": 0, 00:20:13.059 "data_size": 65536 00:20:13.059 }, 00:20:13.059 { 00:20:13.059 "name": "BaseBdev2", 00:20:13.059 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:13.059 "is_configured": true, 00:20:13.059 "data_offset": 0, 00:20:13.059 "data_size": 65536 00:20:13.059 } 00:20:13.059 ] 00:20:13.059 }' 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.059 09:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.059 "name": "raid_bdev1", 00:20:13.059 "uuid": "5e4c85f5-8f70-4117-9b68-6b53ee602a17", 00:20:13.059 "strip_size_kb": 0, 00:20:13.059 "state": "online", 00:20:13.059 "raid_level": "raid1", 00:20:13.059 "superblock": false, 00:20:13.059 "num_base_bdevs": 2, 00:20:13.059 "num_base_bdevs_discovered": 2, 00:20:13.059 "num_base_bdevs_operational": 2, 00:20:13.059 "base_bdevs_list": [ 00:20:13.059 { 00:20:13.059 "name": "spare", 00:20:13.059 "uuid": "41dd5e6c-be94-5302-8b81-93cb0cb950c7", 00:20:13.059 "is_configured": true, 00:20:13.059 "data_offset": 0, 00:20:13.059 
"data_size": 65536 00:20:13.059 }, 00:20:13.059 { 00:20:13.059 "name": "BaseBdev2", 00:20:13.059 "uuid": "29de9155-086b-54f1-8c3b-ae4ca8b42189", 00:20:13.059 "is_configured": true, 00:20:13.059 "data_offset": 0, 00:20:13.059 "data_size": 65536 00:20:13.059 } 00:20:13.059 ] 00:20:13.059 }' 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.059 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.626 [2024-11-06 09:13:12.499962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.626 [2024-11-06 09:13:12.500163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.626 00:20:13.626 Latency(us) 00:20:13.626 [2024-11-06T09:13:12.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.626 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:13.626 raid_bdev1 : 7.72 88.70 266.10 0.00 0.00 16006.08 324.06 133914.53 00:20:13.626 [2024-11-06T09:13:12.666Z] =================================================================================================================== 00:20:13.626 [2024-11-06T09:13:12.666Z] Total : 88.70 266.10 0.00 0.00 16006.08 324.06 133914.53 00:20:13.626 [2024-11-06 09:13:12.572092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.626 [2024-11-06 09:13:12.572162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.626 [2024-11-06 09:13:12.572257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:20:13.626 [2024-11-06 09:13:12.572289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:13.626 { 00:20:13.626 "results": [ 00:20:13.626 { 00:20:13.626 "job": "raid_bdev1", 00:20:13.626 "core_mask": "0x1", 00:20:13.626 "workload": "randrw", 00:20:13.626 "percentage": 50, 00:20:13.626 "status": "finished", 00:20:13.626 "queue_depth": 2, 00:20:13.626 "io_size": 3145728, 00:20:13.626 "runtime": 7.722695, 00:20:13.626 "iops": 88.6996055133603, 00:20:13.626 "mibps": 266.09881654008086, 00:20:13.626 "io_failed": 0, 00:20:13.626 "io_timeout": 0, 00:20:13.626 "avg_latency_us": 16006.079671679418, 00:20:13.626 "min_latency_us": 324.06104417670684, 00:20:13.626 "max_latency_us": 133914.52530120482 00:20:13.626 } 00:20:13.626 ], 00:20:13.626 "core_count": 1 00:20:13.626 } 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.626 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:13.886 /dev/nbd0 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 
00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.886 1+0 records in 00:20:13.886 1+0 records out 00:20:13.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045662 s, 9.0 MB/s 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:13.886 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.145 09:13:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:14.145 /dev/nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.404 1+0 records in 00:20:14.404 1+0 records out 00:20:14.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471841 s, 8.7 MB/s 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.404 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.662 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:14.920 09:13:13 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76204 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76204 ']' 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76204 00:20:14.920 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76204 00:20:15.184 killing process with pid 76204 00:20:15.184 Received shutdown signal, test time was about 9.165451 seconds 00:20:15.184 00:20:15.184 Latency(us) 00:20:15.184 [2024-11-06T09:13:14.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.184 [2024-11-06T09:13:14.224Z] =================================================================================================================== 00:20:15.184 [2024-11-06T09:13:14.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76204' 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76204 00:20:15.184 09:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76204 00:20:15.184 [2024-11-06 09:13:13.988222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:20:15.455 [2024-11-06 09:13:14.239732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:16.832 00:20:16.832 real 0m12.440s 00:20:16.832 user 0m15.555s 00:20:16.832 sys 0m1.702s 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.832 ************************************ 00:20:16.832 END TEST raid_rebuild_test_io 00:20:16.832 ************************************ 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.832 09:13:15 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:20:16.832 09:13:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:16.832 09:13:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:16.832 09:13:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.832 ************************************ 00:20:16.832 START TEST raid_rebuild_test_sb_io 00:20:16.832 ************************************ 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:16.832 09:13:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76583 
00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76583 00:20:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 76583 ']' 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.832 09:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.832 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:16.832 Zero copy mechanism will not be used. 00:20:16.832 [2024-11-06 09:13:15.677734] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:20:16.832 [2024-11-06 09:13:15.677896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76583 ] 00:20:16.832 [2024-11-06 09:13:15.861677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.118 [2024-11-06 09:13:15.991795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.377 [2024-11-06 09:13:16.218010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.377 [2024-11-06 09:13:16.218088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.635 BaseBdev1_malloc 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.635 [2024-11-06 09:13:16.602648] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:17.635 [2024-11-06 09:13:16.602875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.635 [2024-11-06 09:13:16.602913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:17.635 [2024-11-06 09:13:16.602930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.635 [2024-11-06 09:13:16.605589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.635 [2024-11-06 09:13:16.605640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.635 BaseBdev1 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.635 BaseBdev2_malloc 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.635 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.635 [2024-11-06 09:13:16.662356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:17.635 [2024-11-06 09:13:16.662426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:17.635 [2024-11-06 09:13:16.662450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:17.635 [2024-11-06 09:13:16.662468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.636 [2024-11-06 09:13:16.665009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.636 [2024-11-06 09:13:16.665204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:17.636 BaseBdev2 00:20:17.636 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.636 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:17.636 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.636 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.894 spare_malloc 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.894 spare_delay 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.894 
[2024-11-06 09:13:16.741632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.894 [2024-11-06 09:13:16.741700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.894 [2024-11-06 09:13:16.741727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:17.894 [2024-11-06 09:13:16.741745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.894 [2024-11-06 09:13:16.744356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.894 [2024-11-06 09:13:16.744536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.894 spare 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.894 [2024-11-06 09:13:16.749692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.894 [2024-11-06 09:13:16.751951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.894 [2024-11-06 09:13:16.752264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:17.894 [2024-11-06 09:13:16.752305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:17.894 [2024-11-06 09:13:16.752602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:17.894 [2024-11-06 09:13:16.752792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:17.894 [2024-11-06 
09:13:16.752803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:17.894 [2024-11-06 09:13:16.752964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.894 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.894 "name": "raid_bdev1", 00:20:17.894 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:17.894 "strip_size_kb": 0, 00:20:17.895 "state": "online", 00:20:17.895 "raid_level": "raid1", 00:20:17.895 "superblock": true, 00:20:17.895 "num_base_bdevs": 2, 00:20:17.895 "num_base_bdevs_discovered": 2, 00:20:17.895 "num_base_bdevs_operational": 2, 00:20:17.895 "base_bdevs_list": [ 00:20:17.895 { 00:20:17.895 "name": "BaseBdev1", 00:20:17.895 "uuid": "ef8dab06-ef4f-5e17-b500-2ebc9f75ab98", 00:20:17.895 "is_configured": true, 00:20:17.895 "data_offset": 2048, 00:20:17.895 "data_size": 63488 00:20:17.895 }, 00:20:17.895 { 00:20:17.895 "name": "BaseBdev2", 00:20:17.895 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:17.895 "is_configured": true, 00:20:17.895 "data_offset": 2048, 00:20:17.895 "data_size": 63488 00:20:17.895 } 00:20:17.895 ] 00:20:17.895 }' 00:20:17.895 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.895 09:13:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.460 [2024-11-06 09:13:17.237428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.460 [2024-11-06 09:13:17.328942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:20:18.460 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.461 "name": "raid_bdev1", 00:20:18.461 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:18.461 "strip_size_kb": 0, 00:20:18.461 "state": "online", 00:20:18.461 "raid_level": "raid1", 00:20:18.461 "superblock": true, 00:20:18.461 "num_base_bdevs": 2, 00:20:18.461 "num_base_bdevs_discovered": 1, 00:20:18.461 "num_base_bdevs_operational": 1, 00:20:18.461 "base_bdevs_list": [ 00:20:18.461 { 00:20:18.461 "name": null, 00:20:18.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.461 "is_configured": false, 00:20:18.461 "data_offset": 0, 00:20:18.461 "data_size": 63488 00:20:18.461 }, 00:20:18.461 { 00:20:18.461 "name": "BaseBdev2", 00:20:18.461 "uuid": 
"12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:18.461 "is_configured": true, 00:20:18.461 "data_offset": 2048, 00:20:18.461 "data_size": 63488 00:20:18.461 } 00:20:18.461 ] 00:20:18.461 }' 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.461 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.461 [2024-11-06 09:13:17.437702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:18.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:18.461 Zero copy mechanism will not be used. 00:20:18.461 Running I/O for 60 seconds... 00:20:19.027 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.027 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.027 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.027 [2024-11-06 09:13:17.811997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.027 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.027 09:13:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:19.027 [2024-11-06 09:13:17.880975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:19.027 [2024-11-06 09:13:17.883285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.027 [2024-11-06 09:13:18.006635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:19.286 [2024-11-06 09:13:18.121802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:19.286 [2024-11-06 09:13:18.122467] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:19.543 [2024-11-06 09:13:18.359024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:19.543 188.00 IOPS, 564.00 MiB/s [2024-11-06T09:13:18.583Z] [2024-11-06 09:13:18.479161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:19.543 [2024-11-06 09:13:18.479645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:19.802 [2024-11-06 09:13:18.821537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:19.802 [2024-11-06 09:13:18.822096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.061 "name": "raid_bdev1", 00:20:20.061 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:20.061 "strip_size_kb": 0, 00:20:20.061 "state": "online", 00:20:20.061 "raid_level": "raid1", 00:20:20.061 "superblock": true, 00:20:20.061 "num_base_bdevs": 2, 00:20:20.061 "num_base_bdevs_discovered": 2, 00:20:20.061 "num_base_bdevs_operational": 2, 00:20:20.061 "process": { 00:20:20.061 "type": "rebuild", 00:20:20.061 "target": "spare", 00:20:20.061 "progress": { 00:20:20.061 "blocks": 14336, 00:20:20.061 "percent": 22 00:20:20.061 } 00:20:20.061 }, 00:20:20.061 "base_bdevs_list": [ 00:20:20.061 { 00:20:20.061 "name": "spare", 00:20:20.061 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:20.061 "is_configured": true, 00:20:20.061 "data_offset": 2048, 00:20:20.061 "data_size": 63488 00:20:20.061 }, 00:20:20.061 { 00:20:20.061 "name": "BaseBdev2", 00:20:20.061 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:20.061 "is_configured": true, 00:20:20.061 "data_offset": 2048, 00:20:20.061 "data_size": 63488 00:20:20.061 } 00:20:20.061 ] 00:20:20.061 }' 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:20.061 09:13:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.061 [2024-11-06 09:13:18.975259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:20.061 [2024-11-06 09:13:19.055682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:20.061 [2024-11-06 09:13:19.055957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:20.321 [2024-11-06 09:13:19.157483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:20.321 [2024-11-06 09:13:19.165604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.321 [2024-11-06 09:13:19.165642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:20.321 [2024-11-06 09:13:19.165659] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:20.321 [2024-11-06 09:13:19.209332] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.321 "name": "raid_bdev1", 00:20:20.321 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:20.321 "strip_size_kb": 0, 00:20:20.321 "state": "online", 00:20:20.321 "raid_level": "raid1", 00:20:20.321 "superblock": true, 00:20:20.321 "num_base_bdevs": 2, 00:20:20.321 "num_base_bdevs_discovered": 1, 00:20:20.321 "num_base_bdevs_operational": 1, 00:20:20.321 "base_bdevs_list": [ 00:20:20.321 { 00:20:20.321 "name": null, 00:20:20.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.321 "is_configured": false, 00:20:20.321 "data_offset": 0, 00:20:20.321 "data_size": 63488 00:20:20.321 }, 00:20:20.321 { 00:20:20.321 "name": "BaseBdev2", 00:20:20.321 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:20.321 "is_configured": true, 00:20:20.321 "data_offset": 2048, 00:20:20.321 "data_size": 63488 00:20:20.321 } 00:20:20.321 ] 00:20:20.321 }' 
00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.321 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.838 153.50 IOPS, 460.50 MiB/s [2024-11-06T09:13:19.878Z] 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.838 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.838 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:20.838 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.839 "name": "raid_bdev1", 00:20:20.839 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:20.839 "strip_size_kb": 0, 00:20:20.839 "state": "online", 00:20:20.839 "raid_level": "raid1", 00:20:20.839 "superblock": true, 00:20:20.839 "num_base_bdevs": 2, 00:20:20.839 "num_base_bdevs_discovered": 1, 00:20:20.839 "num_base_bdevs_operational": 1, 00:20:20.839 "base_bdevs_list": [ 00:20:20.839 { 00:20:20.839 "name": null, 00:20:20.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.839 
"is_configured": false, 00:20:20.839 "data_offset": 0, 00:20:20.839 "data_size": 63488 00:20:20.839 }, 00:20:20.839 { 00:20:20.839 "name": "BaseBdev2", 00:20:20.839 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:20.839 "is_configured": true, 00:20:20.839 "data_offset": 2048, 00:20:20.839 "data_size": 63488 00:20:20.839 } 00:20:20.839 ] 00:20:20.839 }' 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.839 [2024-11-06 09:13:19.813994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.839 09:13:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:20.839 [2024-11-06 09:13:19.865485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:20.839 [2024-11-06 09:13:19.867656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.097 [2024-11-06 09:13:19.975658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:21.097 [2024-11-06 09:13:19.976468] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:21.355 [2024-11-06 09:13:20.185049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:21.355 [2024-11-06 09:13:20.185438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:21.613 [2024-11-06 09:13:20.410333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:21.613 [2024-11-06 09:13:20.416545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:21.872 159.00 IOPS, 477.00 MiB/s [2024-11-06T09:13:20.912Z] [2024-11-06 09:13:20.653532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.872 [2024-11-06 
09:13:20.886301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.872 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.872 "name": "raid_bdev1", 00:20:21.872 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:21.872 "strip_size_kb": 0, 00:20:21.872 "state": "online", 00:20:21.872 "raid_level": "raid1", 00:20:21.872 "superblock": true, 00:20:21.872 "num_base_bdevs": 2, 00:20:21.872 "num_base_bdevs_discovered": 2, 00:20:21.872 "num_base_bdevs_operational": 2, 00:20:21.872 "process": { 00:20:21.872 "type": "rebuild", 00:20:21.872 "target": "spare", 00:20:21.872 "progress": { 00:20:21.872 "blocks": 12288, 00:20:21.872 "percent": 19 00:20:21.872 } 00:20:21.872 }, 00:20:21.872 "base_bdevs_list": [ 00:20:21.872 { 00:20:21.872 "name": "spare", 00:20:21.872 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:21.872 "is_configured": true, 00:20:21.872 "data_offset": 2048, 00:20:21.872 "data_size": 63488 00:20:21.872 }, 00:20:21.872 { 00:20:21.872 "name": "BaseBdev2", 00:20:21.872 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:21.872 "is_configured": true, 00:20:21.872 "data_offset": 2048, 00:20:21.872 "data_size": 63488 00:20:21.872 } 00:20:21.872 ] 00:20:21.872 }' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:22.131 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.131 09:13:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.131 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.131 [2024-11-06 09:13:21.027394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:22.131 [2024-11-06 09:13:21.027739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:22.131 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.131 "name": "raid_bdev1", 00:20:22.131 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:22.131 "strip_size_kb": 0, 00:20:22.131 "state": "online", 00:20:22.131 "raid_level": "raid1", 00:20:22.131 "superblock": true, 00:20:22.131 "num_base_bdevs": 2, 00:20:22.131 "num_base_bdevs_discovered": 2, 00:20:22.131 "num_base_bdevs_operational": 2, 00:20:22.131 "process": { 00:20:22.131 "type": "rebuild", 00:20:22.131 "target": "spare", 00:20:22.131 "progress": { 00:20:22.131 "blocks": 14336, 00:20:22.131 "percent": 22 00:20:22.131 } 00:20:22.131 }, 00:20:22.131 "base_bdevs_list": [ 00:20:22.131 { 00:20:22.131 "name": "spare", 00:20:22.131 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:22.131 "is_configured": true, 00:20:22.131 "data_offset": 2048, 00:20:22.131 "data_size": 63488 00:20:22.131 }, 00:20:22.131 { 00:20:22.132 "name": "BaseBdev2", 00:20:22.132 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:22.132 "is_configured": true, 00:20:22.132 "data_offset": 2048, 00:20:22.132 "data_size": 63488 00:20:22.132 } 00:20:22.132 ] 00:20:22.132 }' 00:20:22.132 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.132 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.132 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.132 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.132 09:13:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.700 136.25 IOPS, 408.75 
MiB/s [2024-11-06T09:13:21.740Z] [2024-11-06 09:13:21.703853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:23.265 [2024-11-06 09:13:22.149723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:23.265 [2024-11-06 09:13:22.150264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.265 "name": "raid_bdev1", 00:20:23.265 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:23.265 "strip_size_kb": 0, 
00:20:23.265 "state": "online", 00:20:23.265 "raid_level": "raid1", 00:20:23.265 "superblock": true, 00:20:23.265 "num_base_bdevs": 2, 00:20:23.265 "num_base_bdevs_discovered": 2, 00:20:23.265 "num_base_bdevs_operational": 2, 00:20:23.265 "process": { 00:20:23.265 "type": "rebuild", 00:20:23.265 "target": "spare", 00:20:23.265 "progress": { 00:20:23.265 "blocks": 32768, 00:20:23.265 "percent": 51 00:20:23.265 } 00:20:23.265 }, 00:20:23.265 "base_bdevs_list": [ 00:20:23.265 { 00:20:23.265 "name": "spare", 00:20:23.265 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:23.265 "is_configured": true, 00:20:23.265 "data_offset": 2048, 00:20:23.265 "data_size": 63488 00:20:23.265 }, 00:20:23.265 { 00:20:23.265 "name": "BaseBdev2", 00:20:23.265 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:23.265 "is_configured": true, 00:20:23.265 "data_offset": 2048, 00:20:23.265 "data_size": 63488 00:20:23.265 } 00:20:23.265 ] 00:20:23.265 }' 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.265 09:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:23.522 [2024-11-06 09:13:22.371841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:23.780 118.60 IOPS, 355.80 MiB/s [2024-11-06T09:13:22.820Z] [2024-11-06 09:13:22.706201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:24.347 [2024-11-06 09:13:23.130260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.347 "name": "raid_bdev1", 00:20:24.347 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:24.347 "strip_size_kb": 0, 00:20:24.347 "state": "online", 00:20:24.347 "raid_level": "raid1", 00:20:24.347 "superblock": true, 00:20:24.347 "num_base_bdevs": 2, 00:20:24.347 "num_base_bdevs_discovered": 2, 00:20:24.347 "num_base_bdevs_operational": 2, 00:20:24.347 "process": { 00:20:24.347 "type": "rebuild", 00:20:24.347 "target": "spare", 00:20:24.347 "progress": { 00:20:24.347 "blocks": 47104, 00:20:24.347 "percent": 74 00:20:24.347 } 00:20:24.347 }, 00:20:24.347 
"base_bdevs_list": [ 00:20:24.347 { 00:20:24.347 "name": "spare", 00:20:24.347 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:24.347 "is_configured": true, 00:20:24.347 "data_offset": 2048, 00:20:24.347 "data_size": 63488 00:20:24.347 }, 00:20:24.347 { 00:20:24.347 "name": "BaseBdev2", 00:20:24.347 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:24.347 "is_configured": true, 00:20:24.347 "data_offset": 2048, 00:20:24.347 "data_size": 63488 00:20:24.347 } 00:20:24.347 ] 00:20:24.347 }' 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.347 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.606 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.606 09:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:24.865 103.83 IOPS, 311.50 MiB/s [2024-11-06T09:13:23.905Z] [2024-11-06 09:13:23.789432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:24.865 [2024-11-06 09:13:23.790065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:25.123 [2024-11-06 09:13:23.910761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:25.382 [2024-11-06 09:13:24.231483] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:25.382 [2024-11-06 09:13:24.336992] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:25.382 [2024-11-06 09:13:24.340097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:25.639 93.57 IOPS, 280.71 MiB/s [2024-11-06T09:13:24.679Z] 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.639 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.639 "name": "raid_bdev1", 00:20:25.639 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:25.639 "strip_size_kb": 0, 00:20:25.639 "state": "online", 00:20:25.639 "raid_level": "raid1", 00:20:25.639 "superblock": true, 00:20:25.639 "num_base_bdevs": 2, 00:20:25.639 "num_base_bdevs_discovered": 2, 00:20:25.639 "num_base_bdevs_operational": 2, 00:20:25.639 "base_bdevs_list": [ 00:20:25.639 { 00:20:25.639 "name": "spare", 00:20:25.639 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:25.639 "is_configured": true, 00:20:25.639 "data_offset": 2048, 00:20:25.639 "data_size": 
63488 00:20:25.639 }, 00:20:25.639 { 00:20:25.639 "name": "BaseBdev2", 00:20:25.639 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:25.639 "is_configured": true, 00:20:25.639 "data_offset": 2048, 00:20:25.640 "data_size": 63488 00:20:25.640 } 00:20:25.640 ] 00:20:25.640 }' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.640 09:13:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.640 "name": "raid_bdev1", 00:20:25.640 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:25.640 "strip_size_kb": 0, 00:20:25.640 "state": "online", 00:20:25.640 "raid_level": "raid1", 00:20:25.640 "superblock": true, 00:20:25.640 "num_base_bdevs": 2, 00:20:25.640 "num_base_bdevs_discovered": 2, 00:20:25.640 "num_base_bdevs_operational": 2, 00:20:25.640 "base_bdevs_list": [ 00:20:25.640 { 00:20:25.640 "name": "spare", 00:20:25.640 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:25.640 "is_configured": true, 00:20:25.640 "data_offset": 2048, 00:20:25.640 "data_size": 63488 00:20:25.640 }, 00:20:25.640 { 00:20:25.640 "name": "BaseBdev2", 00:20:25.640 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:25.640 "is_configured": true, 00:20:25.640 "data_offset": 2048, 00:20:25.640 "data_size": 63488 00:20:25.640 } 00:20:25.640 ] 00:20:25.640 }' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.640 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.899 "name": "raid_bdev1", 00:20:25.899 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:25.899 "strip_size_kb": 0, 00:20:25.899 "state": "online", 00:20:25.899 "raid_level": "raid1", 00:20:25.899 "superblock": true, 00:20:25.899 "num_base_bdevs": 2, 00:20:25.899 "num_base_bdevs_discovered": 2, 00:20:25.899 "num_base_bdevs_operational": 2, 00:20:25.899 "base_bdevs_list": [ 00:20:25.899 { 00:20:25.899 "name": "spare", 00:20:25.899 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:25.899 "is_configured": true, 00:20:25.899 "data_offset": 2048, 00:20:25.899 "data_size": 63488 00:20:25.899 }, 00:20:25.899 { 00:20:25.899 "name": "BaseBdev2", 00:20:25.899 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:25.899 "is_configured": true, 00:20:25.899 
"data_offset": 2048, 00:20:25.899 "data_size": 63488 00:20:25.899 } 00:20:25.899 ] 00:20:25.899 }' 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.899 09:13:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.158 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:26.158 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.158 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.158 [2024-11-06 09:13:25.162912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.158 [2024-11-06 09:13:25.162949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.417 00:20:26.417 Latency(us) 00:20:26.417 [2024-11-06T09:13:25.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.417 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:26.417 raid_bdev1 : 7.78 88.23 264.69 0.00 0.00 14981.78 312.55 114543.24 00:20:26.417 [2024-11-06T09:13:25.457Z] =================================================================================================================== 00:20:26.417 [2024-11-06T09:13:25.457Z] Total : 88.23 264.69 0.00 0.00 14981.78 312.55 114543.24 00:20:26.417 [2024-11-06 09:13:25.225367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.417 { 00:20:26.417 "results": [ 00:20:26.417 { 00:20:26.417 "job": "raid_bdev1", 00:20:26.417 "core_mask": "0x1", 00:20:26.417 "workload": "randrw", 00:20:26.417 "percentage": 50, 00:20:26.417 "status": "finished", 00:20:26.417 "queue_depth": 2, 00:20:26.417 "io_size": 3145728, 00:20:26.417 "runtime": 7.775164, 00:20:26.417 "iops": 88.2296502041629, 00:20:26.417 "mibps": 264.68895061248867, 
00:20:26.417 "io_failed": 0, 00:20:26.417 "io_timeout": 0, 00:20:26.417 "avg_latency_us": 14981.781708759234, 00:20:26.417 "min_latency_us": 312.54618473895584, 00:20:26.417 "max_latency_us": 114543.24176706828 00:20:26.417 } 00:20:26.417 ], 00:20:26.417 "core_count": 1 00:20:26.417 } 00:20:26.417 [2024-11-06 09:13:25.225544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.417 [2024-11-06 09:13:25.225637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.417 [2024-11-06 09:13:25.225655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.417 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:26.677 /dev/nbd0 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.677 1+0 records in 00:20:26.677 1+0 records out 00:20:26.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269848 s, 15.2 MB/s 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.677 09:13:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.677 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:26.960 /dev/nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.960 1+0 records in 00:20:26.960 1+0 records out 00:20:26.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300162 s, 13.6 MB/s 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:26.960 09:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:27.219 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:27.478 
09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.478 [2024-11-06 09:13:26.475285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.478 [2024-11-06 09:13:26.475366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.478 [2024-11-06 09:13:26.475390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:27.478 [2024-11-06 09:13:26.475406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.478 [2024-11-06 09:13:26.477964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.478 [2024-11-06 09:13:26.478017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.478 [2024-11-06 09:13:26.478126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:27.478 [2024-11-06 09:13:26.478190] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.478 [2024-11-06 09:13:26.478352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.478 spare 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.478 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.737 [2024-11-06 09:13:26.578300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:27.737 [2024-11-06 09:13:26.578336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:27.737 [2024-11-06 09:13:26.578685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:20:27.737 [2024-11-06 09:13:26.578894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:27.737 [2024-11-06 09:13:26.578910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:27.737 [2024-11-06 09:13:26.579153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.737 "name": "raid_bdev1", 00:20:27.737 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:27.737 "strip_size_kb": 0, 00:20:27.737 "state": "online", 00:20:27.737 "raid_level": "raid1", 00:20:27.737 "superblock": true, 00:20:27.737 "num_base_bdevs": 2, 00:20:27.737 "num_base_bdevs_discovered": 2, 00:20:27.737 "num_base_bdevs_operational": 2, 00:20:27.737 "base_bdevs_list": [ 00:20:27.737 { 00:20:27.737 "name": "spare", 00:20:27.737 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:27.737 "is_configured": true, 00:20:27.737 "data_offset": 2048, 00:20:27.737 "data_size": 63488 00:20:27.737 }, 00:20:27.737 { 00:20:27.737 "name": 
"BaseBdev2", 00:20:27.737 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:27.737 "is_configured": true, 00:20:27.737 "data_offset": 2048, 00:20:27.737 "data_size": 63488 00:20:27.737 } 00:20:27.737 ] 00:20:27.737 }' 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.737 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.995 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.995 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.995 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.995 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.996 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.996 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.996 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.996 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.996 09:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.996 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.254 "name": "raid_bdev1", 00:20:28.254 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:28.254 "strip_size_kb": 0, 00:20:28.254 "state": "online", 00:20:28.254 "raid_level": "raid1", 00:20:28.254 "superblock": true, 00:20:28.254 "num_base_bdevs": 2, 00:20:28.254 "num_base_bdevs_discovered": 2, 00:20:28.254 
"num_base_bdevs_operational": 2, 00:20:28.254 "base_bdevs_list": [ 00:20:28.254 { 00:20:28.254 "name": "spare", 00:20:28.254 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:28.254 "is_configured": true, 00:20:28.254 "data_offset": 2048, 00:20:28.254 "data_size": 63488 00:20:28.254 }, 00:20:28.254 { 00:20:28.254 "name": "BaseBdev2", 00:20:28.254 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:28.254 "is_configured": true, 00:20:28.254 "data_offset": 2048, 00:20:28.254 "data_size": 63488 00:20:28.254 } 00:20:28.254 ] 00:20:28.254 }' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:20:28.254 [2024-11-06 09:13:27.186442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:20:28.254 "name": "raid_bdev1", 00:20:28.254 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:28.254 "strip_size_kb": 0, 00:20:28.254 "state": "online", 00:20:28.254 "raid_level": "raid1", 00:20:28.254 "superblock": true, 00:20:28.254 "num_base_bdevs": 2, 00:20:28.254 "num_base_bdevs_discovered": 1, 00:20:28.254 "num_base_bdevs_operational": 1, 00:20:28.254 "base_bdevs_list": [ 00:20:28.254 { 00:20:28.254 "name": null, 00:20:28.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.254 "is_configured": false, 00:20:28.254 "data_offset": 0, 00:20:28.254 "data_size": 63488 00:20:28.254 }, 00:20:28.254 { 00:20:28.254 "name": "BaseBdev2", 00:20:28.254 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:28.254 "is_configured": true, 00:20:28.254 "data_offset": 2048, 00:20:28.254 "data_size": 63488 00:20:28.254 } 00:20:28.254 ] 00:20:28.254 }' 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.254 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.821 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.821 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.821 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.821 [2024-11-06 09:13:27.670181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.821 [2024-11-06 09:13:27.670396] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:28.821 [2024-11-06 09:13:27.670414] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:28.821 [2024-11-06 09:13:27.670460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.821 [2024-11-06 09:13:27.687561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:20:28.821 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.821 09:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:28.821 [2024-11-06 09:13:27.689823] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.756 "name": "raid_bdev1", 00:20:29.756 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:29.756 "strip_size_kb": 0, 00:20:29.756 "state": "online", 
00:20:29.756 "raid_level": "raid1", 00:20:29.756 "superblock": true, 00:20:29.756 "num_base_bdevs": 2, 00:20:29.756 "num_base_bdevs_discovered": 2, 00:20:29.756 "num_base_bdevs_operational": 2, 00:20:29.756 "process": { 00:20:29.756 "type": "rebuild", 00:20:29.756 "target": "spare", 00:20:29.756 "progress": { 00:20:29.756 "blocks": 20480, 00:20:29.756 "percent": 32 00:20:29.756 } 00:20:29.756 }, 00:20:29.756 "base_bdevs_list": [ 00:20:29.756 { 00:20:29.756 "name": "spare", 00:20:29.756 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:29.756 "is_configured": true, 00:20:29.756 "data_offset": 2048, 00:20:29.756 "data_size": 63488 00:20:29.756 }, 00:20:29.756 { 00:20:29.756 "name": "BaseBdev2", 00:20:29.756 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:29.756 "is_configured": true, 00:20:29.756 "data_offset": 2048, 00:20:29.756 "data_size": 63488 00:20:29.756 } 00:20:29.756 ] 00:20:29.756 }' 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.756 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.015 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.015 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.015 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:30.015 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.015 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.016 [2024-11-06 09:13:28.850271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.016 [2024-11-06 09:13:28.895527] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.016 [2024-11-06 
09:13:28.895799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.016 [2024-11-06 09:13:28.895826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.016 [2024-11-06 09:13:28.895836] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.016 "name": "raid_bdev1", 00:20:30.016 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:30.016 "strip_size_kb": 0, 00:20:30.016 "state": "online", 00:20:30.016 "raid_level": "raid1", 00:20:30.016 "superblock": true, 00:20:30.016 "num_base_bdevs": 2, 00:20:30.016 "num_base_bdevs_discovered": 1, 00:20:30.016 "num_base_bdevs_operational": 1, 00:20:30.016 "base_bdevs_list": [ 00:20:30.016 { 00:20:30.016 "name": null, 00:20:30.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.016 "is_configured": false, 00:20:30.016 "data_offset": 0, 00:20:30.016 "data_size": 63488 00:20:30.016 }, 00:20:30.016 { 00:20:30.016 "name": "BaseBdev2", 00:20:30.016 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:30.016 "is_configured": true, 00:20:30.016 "data_offset": 2048, 00:20:30.016 "data_size": 63488 00:20:30.016 } 00:20:30.016 ] 00:20:30.016 }' 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.016 09:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.582 09:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:30.582 09:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.582 09:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.582 [2024-11-06 09:13:29.371014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:30.582 [2024-11-06 09:13:29.371098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.582 [2024-11-06 09:13:29.371129] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:20:30.582 [2024-11-06 09:13:29.371142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.582 [2024-11-06 09:13:29.371657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.582 [2024-11-06 09:13:29.371683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:30.582 [2024-11-06 09:13:29.371787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:30.582 [2024-11-06 09:13:29.371801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:30.582 [2024-11-06 09:13:29.371815] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:30.582 [2024-11-06 09:13:29.371841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.582 [2024-11-06 09:13:29.388842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:20:30.582 spare 00:20:30.582 09:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.582 09:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:30.582 [2024-11-06 09:13:29.391158] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.519 "name": "raid_bdev1", 00:20:31.519 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:31.519 "strip_size_kb": 0, 00:20:31.519 "state": "online", 00:20:31.519 "raid_level": "raid1", 00:20:31.519 "superblock": true, 00:20:31.519 "num_base_bdevs": 2, 00:20:31.519 "num_base_bdevs_discovered": 2, 00:20:31.519 "num_base_bdevs_operational": 2, 00:20:31.519 "process": { 00:20:31.519 "type": "rebuild", 00:20:31.519 "target": "spare", 00:20:31.519 "progress": { 00:20:31.519 "blocks": 20480, 00:20:31.519 "percent": 32 00:20:31.519 } 00:20:31.519 }, 00:20:31.519 "base_bdevs_list": [ 00:20:31.519 { 00:20:31.519 "name": "spare", 00:20:31.519 "uuid": "ee6b64b3-9595-5256-82b1-8de0e90bf084", 00:20:31.519 "is_configured": true, 00:20:31.519 "data_offset": 2048, 00:20:31.519 "data_size": 63488 00:20:31.519 }, 00:20:31.519 { 00:20:31.519 "name": "BaseBdev2", 00:20:31.519 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:31.519 "is_configured": true, 00:20:31.519 "data_offset": 2048, 00:20:31.519 "data_size": 63488 00:20:31.519 } 00:20:31.519 ] 00:20:31.519 }' 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.519 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 [2024-11-06 09:13:30.522949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:31.778 [2024-11-06 09:13:30.597087] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:31.778 [2024-11-06 09:13:30.597178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.778 [2024-11-06 09:13:30.597195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:31.778 [2024-11-06 09:13:30.597208] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.778 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.779 "name": "raid_bdev1", 00:20:31.779 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:31.779 "strip_size_kb": 0, 00:20:31.779 "state": "online", 00:20:31.779 "raid_level": "raid1", 00:20:31.779 "superblock": true, 00:20:31.779 "num_base_bdevs": 2, 00:20:31.779 "num_base_bdevs_discovered": 1, 00:20:31.779 "num_base_bdevs_operational": 1, 00:20:31.779 "base_bdevs_list": [ 00:20:31.779 { 00:20:31.779 "name": null, 00:20:31.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.779 "is_configured": false, 00:20:31.779 "data_offset": 0, 00:20:31.779 "data_size": 63488 00:20:31.779 }, 00:20:31.779 { 00:20:31.779 "name": "BaseBdev2", 00:20:31.779 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:31.779 "is_configured": true, 00:20:31.779 "data_offset": 2048, 00:20:31.779 "data_size": 63488 00:20:31.779 } 00:20:31.779 ] 00:20:31.779 }' 
00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.779 09:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.037 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.294 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.294 "name": "raid_bdev1", 00:20:32.294 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:32.294 "strip_size_kb": 0, 00:20:32.294 "state": "online", 00:20:32.294 "raid_level": "raid1", 00:20:32.294 "superblock": true, 00:20:32.294 "num_base_bdevs": 2, 00:20:32.294 "num_base_bdevs_discovered": 1, 00:20:32.294 "num_base_bdevs_operational": 1, 00:20:32.294 "base_bdevs_list": [ 00:20:32.294 { 00:20:32.294 "name": null, 00:20:32.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.294 "is_configured": false, 00:20:32.294 "data_offset": 0, 
00:20:32.294 "data_size": 63488 00:20:32.294 }, 00:20:32.294 { 00:20:32.294 "name": "BaseBdev2", 00:20:32.294 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:32.294 "is_configured": true, 00:20:32.294 "data_offset": 2048, 00:20:32.294 "data_size": 63488 00:20:32.294 } 00:20:32.294 ] 00:20:32.294 }' 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.295 [2024-11-06 09:13:31.172664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:32.295 [2024-11-06 09:13:31.172729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.295 [2024-11-06 09:13:31.172753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:32.295 [2024-11-06 09:13:31.172769] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.295 [2024-11-06 09:13:31.173239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.295 [2024-11-06 09:13:31.173290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.295 [2024-11-06 09:13:31.173385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:32.295 [2024-11-06 09:13:31.173409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:32.295 [2024-11-06 09:13:31.173420] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:32.295 [2024-11-06 09:13:31.173435] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:32.295 BaseBdev1 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.295 09:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.227 "name": "raid_bdev1", 00:20:33.227 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:33.227 "strip_size_kb": 0, 00:20:33.227 "state": "online", 00:20:33.227 "raid_level": "raid1", 00:20:33.227 "superblock": true, 00:20:33.227 "num_base_bdevs": 2, 00:20:33.227 "num_base_bdevs_discovered": 1, 00:20:33.227 "num_base_bdevs_operational": 1, 00:20:33.227 "base_bdevs_list": [ 00:20:33.227 { 00:20:33.227 "name": null, 00:20:33.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.227 "is_configured": false, 00:20:33.227 "data_offset": 0, 00:20:33.227 "data_size": 63488 00:20:33.227 }, 00:20:33.227 { 00:20:33.227 "name": "BaseBdev2", 00:20:33.227 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:33.227 "is_configured": true, 00:20:33.227 "data_offset": 2048, 00:20:33.227 "data_size": 63488 00:20:33.227 } 00:20:33.227 ] 00:20:33.227 }' 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.227 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.792 "name": "raid_bdev1", 00:20:33.792 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:33.792 "strip_size_kb": 0, 00:20:33.792 "state": "online", 00:20:33.792 "raid_level": "raid1", 00:20:33.792 "superblock": true, 00:20:33.792 "num_base_bdevs": 2, 00:20:33.792 "num_base_bdevs_discovered": 1, 00:20:33.792 "num_base_bdevs_operational": 1, 00:20:33.792 "base_bdevs_list": [ 00:20:33.792 { 00:20:33.792 "name": null, 00:20:33.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.792 "is_configured": false, 00:20:33.792 "data_offset": 0, 00:20:33.792 "data_size": 63488 00:20:33.792 }, 00:20:33.792 { 00:20:33.792 "name": "BaseBdev2", 00:20:33.792 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:33.792 "is_configured": true, 
00:20:33.792 "data_offset": 2048, 00:20:33.792 "data_size": 63488 00:20:33.792 } 00:20:33.792 ] 00:20:33.792 }' 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.792 [2024-11-06 09:13:32.790739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.792 [2024-11-06 09:13:32.791100] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:33.792 [2024-11-06 09:13:32.791140] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:33.792 request: 00:20:33.792 { 00:20:33.792 "base_bdev": "BaseBdev1", 00:20:33.792 "raid_bdev": "raid_bdev1", 00:20:33.792 "method": "bdev_raid_add_base_bdev", 00:20:33.792 "req_id": 1 00:20:33.792 } 00:20:33.792 Got JSON-RPC error response 00:20:33.792 response: 00:20:33.792 { 00:20:33.792 "code": -22, 00:20:33.792 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:33.792 } 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:33.792 09:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.167 "name": "raid_bdev1", 00:20:35.167 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:35.167 "strip_size_kb": 0, 00:20:35.167 "state": "online", 00:20:35.167 "raid_level": "raid1", 00:20:35.167 "superblock": true, 00:20:35.167 "num_base_bdevs": 2, 00:20:35.167 "num_base_bdevs_discovered": 1, 00:20:35.167 "num_base_bdevs_operational": 1, 00:20:35.167 "base_bdevs_list": [ 00:20:35.167 { 00:20:35.167 "name": null, 00:20:35.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.167 "is_configured": false, 00:20:35.167 "data_offset": 0, 00:20:35.167 "data_size": 63488 00:20:35.167 }, 00:20:35.167 { 00:20:35.167 "name": "BaseBdev2", 00:20:35.167 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:35.167 "is_configured": true, 00:20:35.167 "data_offset": 2048, 00:20:35.167 "data_size": 63488 00:20:35.167 } 00:20:35.167 ] 00:20:35.167 }' 
00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.167 09:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.426 "name": "raid_bdev1", 00:20:35.426 "uuid": "1e96f6bd-f345-498f-b3a7-09cf5f6d34a3", 00:20:35.426 "strip_size_kb": 0, 00:20:35.426 "state": "online", 00:20:35.426 "raid_level": "raid1", 00:20:35.426 "superblock": true, 00:20:35.426 "num_base_bdevs": 2, 00:20:35.426 "num_base_bdevs_discovered": 1, 00:20:35.426 "num_base_bdevs_operational": 1, 00:20:35.426 "base_bdevs_list": [ 00:20:35.426 { 00:20:35.426 "name": null, 00:20:35.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.426 "is_configured": false, 00:20:35.426 "data_offset": 0, 
00:20:35.426 "data_size": 63488 00:20:35.426 }, 00:20:35.426 { 00:20:35.426 "name": "BaseBdev2", 00:20:35.426 "uuid": "12cccb67-21f8-5f41-b1be-6233c78442a9", 00:20:35.426 "is_configured": true, 00:20:35.426 "data_offset": 2048, 00:20:35.426 "data_size": 63488 00:20:35.426 } 00:20:35.426 ] 00:20:35.426 }' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76583 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 76583 ']' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 76583 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76583 00:20:35.426 killing process with pid 76583 00:20:35.426 Received shutdown signal, test time was about 17.025763 seconds 00:20:35.426 00:20:35.426 Latency(us) 00:20:35.426 [2024-11-06T09:13:34.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.426 [2024-11-06T09:13:34.466Z] =================================================================================================================== 00:20:35.426 [2024-11-06T09:13:34.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76583' 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 76583 00:20:35.426 [2024-11-06 09:13:34.438382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.426 09:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 76583 00:20:35.426 [2024-11-06 09:13:34.438518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.426 [2024-11-06 09:13:34.438584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.426 [2024-11-06 09:13:34.438596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:35.684 [2024-11-06 09:13:34.676587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:37.060 00:20:37.060 real 0m20.297s 00:20:37.060 user 0m26.427s 00:20:37.060 sys 0m2.499s 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 ************************************ 00:20:37.060 END TEST raid_rebuild_test_sb_io 00:20:37.060 ************************************ 00:20:37.060 09:13:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:20:37.060 09:13:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:37.060 09:13:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:20:37.060 09:13:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:37.060 09:13:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 ************************************ 00:20:37.060 START TEST raid_rebuild_test 00:20:37.060 ************************************ 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77269 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77269 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77269 ']' 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.060 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.060 09:13:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 [2024-11-06 09:13:36.061465] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:20:37.060 [2024-11-06 09:13:36.061761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:37.060 Zero copy mechanism will not be used. 00:20:37.060 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77269 ] 00:20:37.322 [2024-11-06 09:13:36.241328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.322 [2024-11-06 09:13:36.360553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.580 [2024-11-06 09:13:36.568681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.580 [2024-11-06 09:13:36.568725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 BaseBdev1_malloc 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 [2024-11-06 09:13:36.943682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:38.148 [2024-11-06 09:13:36.943760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.148 [2024-11-06 09:13:36.943784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:38.148 [2024-11-06 09:13:36.943798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.148 [2024-11-06 09:13:36.946321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.148 [2024-11-06 09:13:36.946365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:38.148 BaseBdev1 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:20:38.148 BaseBdev2_malloc 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 [2024-11-06 09:13:37.001073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:38.148 [2024-11-06 09:13:37.001139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.148 [2024-11-06 09:13:37.001158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:38.148 [2024-11-06 09:13:37.001174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.148 [2024-11-06 09:13:37.003698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.148 [2024-11-06 09:13:37.003739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:38.148 BaseBdev2 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 BaseBdev3_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 [2024-11-06 09:13:37.069079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:38.148 [2024-11-06 09:13:37.069144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.148 [2024-11-06 09:13:37.069165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:38.148 [2024-11-06 09:13:37.069180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.148 [2024-11-06 09:13:37.071756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.148 [2024-11-06 09:13:37.071805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:38.148 BaseBdev3 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 BaseBdev4_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.148 [2024-11-06 09:13:37.126262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:38.148 [2024-11-06 09:13:37.126340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.148 [2024-11-06 09:13:37.126364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:38.148 [2024-11-06 09:13:37.126379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.148 [2024-11-06 09:13:37.128792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.148 [2024-11-06 09:13:37.128837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:38.148 BaseBdev4 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.148 spare_malloc 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.148 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 spare_delay 00:20:38.406 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.406 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:38.406 
09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.406 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 [2024-11-06 09:13:37.194600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:38.406 [2024-11-06 09:13:37.194787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.406 [2024-11-06 09:13:37.194820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:38.406 [2024-11-06 09:13:37.194837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.406 [2024-11-06 09:13:37.197293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.406 [2024-11-06 09:13:37.197332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:38.406 spare 00:20:38.406 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.407 [2024-11-06 09:13:37.206652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.407 [2024-11-06 09:13:37.208734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:38.407 [2024-11-06 09:13:37.208800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:38.407 [2024-11-06 09:13:37.208852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:38.407 [2024-11-06 09:13:37.208927] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:20:38.407 [2024-11-06 09:13:37.208943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:38.407 [2024-11-06 09:13:37.209212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:38.407 [2024-11-06 09:13:37.209396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:38.407 [2024-11-06 09:13:37.209411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:38.407 [2024-11-06 09:13:37.209561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.407 09:13:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.407 "name": "raid_bdev1", 00:20:38.407 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:38.407 "strip_size_kb": 0, 00:20:38.407 "state": "online", 00:20:38.407 "raid_level": "raid1", 00:20:38.407 "superblock": false, 00:20:38.407 "num_base_bdevs": 4, 00:20:38.407 "num_base_bdevs_discovered": 4, 00:20:38.407 "num_base_bdevs_operational": 4, 00:20:38.407 "base_bdevs_list": [ 00:20:38.407 { 00:20:38.407 "name": "BaseBdev1", 00:20:38.407 "uuid": "0edb9f31-43b1-5627-9bb6-7ea7f3e15d88", 00:20:38.407 "is_configured": true, 00:20:38.407 "data_offset": 0, 00:20:38.407 "data_size": 65536 00:20:38.407 }, 00:20:38.407 { 00:20:38.407 "name": "BaseBdev2", 00:20:38.407 "uuid": "40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:38.407 "is_configured": true, 00:20:38.407 "data_offset": 0, 00:20:38.407 "data_size": 65536 00:20:38.407 }, 00:20:38.407 { 00:20:38.407 "name": "BaseBdev3", 00:20:38.407 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:38.407 "is_configured": true, 00:20:38.407 "data_offset": 0, 00:20:38.407 "data_size": 65536 00:20:38.407 }, 00:20:38.407 { 00:20:38.407 "name": "BaseBdev4", 00:20:38.407 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:38.407 "is_configured": true, 00:20:38.407 "data_offset": 0, 00:20:38.407 "data_size": 65536 00:20:38.407 } 00:20:38.407 ] 00:20:38.407 }' 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.407 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:38.665 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.665 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:38.665 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.665 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.665 [2024-11-06 09:13:37.666586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.665 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.937 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:38.937 [2024-11-06 09:13:37.954250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:39.213 /dev/nbd0 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:39.213 09:13:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:39.213 09:13:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:39.213 1+0 records in 00:20:39.213 1+0 records out 00:20:39.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340256 s, 12.0 MB/s 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:39.213 09:13:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:45.775 65536+0 records in 00:20:45.775 65536+0 records out 00:20:45.775 33554432 bytes (34 MB, 32 MiB) copied, 6.49022 s, 5.2 MB/s 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:45.775 
09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.775 [2024-11-06 09:13:44.728713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.775 [2024-11-06 09:13:44.744812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.775 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.776 "name": "raid_bdev1", 00:20:45.776 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:45.776 "strip_size_kb": 0, 00:20:45.776 "state": "online", 00:20:45.776 "raid_level": "raid1", 00:20:45.776 "superblock": false, 00:20:45.776 "num_base_bdevs": 4, 00:20:45.776 "num_base_bdevs_discovered": 3, 00:20:45.776 "num_base_bdevs_operational": 3, 00:20:45.776 "base_bdevs_list": [ 00:20:45.776 { 00:20:45.776 "name": null, 00:20:45.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.776 
"is_configured": false, 00:20:45.776 "data_offset": 0, 00:20:45.776 "data_size": 65536 00:20:45.776 }, 00:20:45.776 { 00:20:45.776 "name": "BaseBdev2", 00:20:45.776 "uuid": "40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:45.776 "is_configured": true, 00:20:45.776 "data_offset": 0, 00:20:45.776 "data_size": 65536 00:20:45.776 }, 00:20:45.776 { 00:20:45.776 "name": "BaseBdev3", 00:20:45.776 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:45.776 "is_configured": true, 00:20:45.776 "data_offset": 0, 00:20:45.776 "data_size": 65536 00:20:45.776 }, 00:20:45.776 { 00:20:45.776 "name": "BaseBdev4", 00:20:45.776 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:45.776 "is_configured": true, 00:20:45.776 "data_offset": 0, 00:20:45.776 "data_size": 65536 00:20:45.776 } 00:20:45.776 ] 00:20:45.776 }' 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.776 09:13:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 09:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.343 09:13:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.343 09:13:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 [2024-11-06 09:13:45.136238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.343 [2024-11-06 09:13:45.153597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:20:46.343 09:13:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.343 09:13:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:46.343 [2024-11-06 09:13:45.155834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.278 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.278 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.278 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.278 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.279 "name": "raid_bdev1", 00:20:47.279 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:47.279 "strip_size_kb": 0, 00:20:47.279 "state": "online", 00:20:47.279 "raid_level": "raid1", 00:20:47.279 "superblock": false, 00:20:47.279 "num_base_bdevs": 4, 00:20:47.279 "num_base_bdevs_discovered": 4, 00:20:47.279 "num_base_bdevs_operational": 4, 00:20:47.279 "process": { 00:20:47.279 "type": "rebuild", 00:20:47.279 "target": "spare", 00:20:47.279 "progress": { 00:20:47.279 "blocks": 20480, 00:20:47.279 "percent": 31 00:20:47.279 } 00:20:47.279 }, 00:20:47.279 "base_bdevs_list": [ 00:20:47.279 { 00:20:47.279 "name": "spare", 00:20:47.279 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:47.279 "is_configured": true, 00:20:47.279 "data_offset": 0, 00:20:47.279 "data_size": 65536 00:20:47.279 }, 00:20:47.279 { 00:20:47.279 "name": "BaseBdev2", 00:20:47.279 "uuid": 
"40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:47.279 "is_configured": true, 00:20:47.279 "data_offset": 0, 00:20:47.279 "data_size": 65536 00:20:47.279 }, 00:20:47.279 { 00:20:47.279 "name": "BaseBdev3", 00:20:47.279 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:47.279 "is_configured": true, 00:20:47.279 "data_offset": 0, 00:20:47.279 "data_size": 65536 00:20:47.279 }, 00:20:47.279 { 00:20:47.279 "name": "BaseBdev4", 00:20:47.279 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:47.279 "is_configured": true, 00:20:47.279 "data_offset": 0, 00:20:47.279 "data_size": 65536 00:20:47.279 } 00:20:47.279 ] 00:20:47.279 }' 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.279 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.279 [2024-11-06 09:13:46.303347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.538 [2024-11-06 09:13:46.361203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:47.538 [2024-11-06 09:13:46.361501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.538 [2024-11-06 09:13:46.361627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.538 [2024-11-06 09:13:46.361673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.538 "name": "raid_bdev1", 00:20:47.538 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:47.538 "strip_size_kb": 0, 00:20:47.538 "state": "online", 
00:20:47.538 "raid_level": "raid1", 00:20:47.538 "superblock": false, 00:20:47.538 "num_base_bdevs": 4, 00:20:47.538 "num_base_bdevs_discovered": 3, 00:20:47.538 "num_base_bdevs_operational": 3, 00:20:47.538 "base_bdevs_list": [ 00:20:47.538 { 00:20:47.538 "name": null, 00:20:47.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.538 "is_configured": false, 00:20:47.538 "data_offset": 0, 00:20:47.538 "data_size": 65536 00:20:47.538 }, 00:20:47.538 { 00:20:47.538 "name": "BaseBdev2", 00:20:47.538 "uuid": "40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:47.538 "is_configured": true, 00:20:47.538 "data_offset": 0, 00:20:47.538 "data_size": 65536 00:20:47.538 }, 00:20:47.538 { 00:20:47.538 "name": "BaseBdev3", 00:20:47.538 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:47.538 "is_configured": true, 00:20:47.538 "data_offset": 0, 00:20:47.538 "data_size": 65536 00:20:47.538 }, 00:20:47.538 { 00:20:47.538 "name": "BaseBdev4", 00:20:47.538 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:47.538 "is_configured": true, 00:20:47.538 "data_offset": 0, 00:20:47.538 "data_size": 65536 00:20:47.538 } 00:20:47.538 ] 00:20:47.538 }' 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.538 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.134 "name": "raid_bdev1", 00:20:48.134 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:48.134 "strip_size_kb": 0, 00:20:48.134 "state": "online", 00:20:48.134 "raid_level": "raid1", 00:20:48.134 "superblock": false, 00:20:48.134 "num_base_bdevs": 4, 00:20:48.134 "num_base_bdevs_discovered": 3, 00:20:48.134 "num_base_bdevs_operational": 3, 00:20:48.134 "base_bdevs_list": [ 00:20:48.134 { 00:20:48.134 "name": null, 00:20:48.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.134 "is_configured": false, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "name": "BaseBdev2", 00:20:48.134 "uuid": "40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "name": "BaseBdev3", 00:20:48.134 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "name": "BaseBdev4", 00:20:48.134 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 } 00:20:48.134 ] 00:20:48.134 }' 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.134 09:13:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 [2024-11-06 09:13:46.995894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.134 [2024-11-06 09:13:47.010527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:20:48.134 09:13:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.134 09:13:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:48.134 [2024-11-06 09:13:47.012687] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.070 09:13:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.070 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.070 "name": "raid_bdev1", 00:20:49.070 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:49.070 "strip_size_kb": 0, 00:20:49.070 "state": "online", 00:20:49.070 "raid_level": "raid1", 00:20:49.070 "superblock": false, 00:20:49.070 "num_base_bdevs": 4, 00:20:49.070 "num_base_bdevs_discovered": 4, 00:20:49.070 "num_base_bdevs_operational": 4, 00:20:49.070 "process": { 00:20:49.070 "type": "rebuild", 00:20:49.070 "target": "spare", 00:20:49.070 "progress": { 00:20:49.070 "blocks": 20480, 00:20:49.070 "percent": 31 00:20:49.070 } 00:20:49.070 }, 00:20:49.070 "base_bdevs_list": [ 00:20:49.070 { 00:20:49.070 "name": "spare", 00:20:49.070 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:49.070 "is_configured": true, 00:20:49.070 "data_offset": 0, 00:20:49.070 "data_size": 65536 00:20:49.070 }, 00:20:49.070 { 00:20:49.070 "name": "BaseBdev2", 00:20:49.070 "uuid": "40ff86d8-f66c-58b2-8779-89e9284ef5cf", 00:20:49.070 "is_configured": true, 00:20:49.070 "data_offset": 0, 00:20:49.070 "data_size": 65536 00:20:49.070 }, 00:20:49.070 { 00:20:49.070 "name": "BaseBdev3", 00:20:49.070 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:49.070 "is_configured": true, 00:20:49.070 "data_offset": 0, 00:20:49.071 "data_size": 65536 00:20:49.071 }, 00:20:49.071 { 00:20:49.071 "name": "BaseBdev4", 00:20:49.071 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:49.071 "is_configured": true, 00:20:49.071 "data_offset": 0, 00:20:49.071 "data_size": 65536 00:20:49.071 } 00:20:49.071 ] 00:20:49.071 }' 00:20:49.071 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 [2024-11-06 09:13:48.148746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:49.329 [2024-11-06 09:13:48.218311] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.329 
09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.329 "name": "raid_bdev1", 00:20:49.329 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:49.329 "strip_size_kb": 0, 00:20:49.329 "state": "online", 00:20:49.329 "raid_level": "raid1", 00:20:49.329 "superblock": false, 00:20:49.329 "num_base_bdevs": 4, 00:20:49.329 "num_base_bdevs_discovered": 3, 00:20:49.329 "num_base_bdevs_operational": 3, 00:20:49.329 "process": { 00:20:49.329 "type": "rebuild", 00:20:49.329 "target": "spare", 00:20:49.329 "progress": { 00:20:49.329 "blocks": 24576, 00:20:49.329 "percent": 37 00:20:49.329 } 00:20:49.329 }, 00:20:49.329 "base_bdevs_list": [ 00:20:49.329 { 00:20:49.329 "name": "spare", 00:20:49.329 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:49.329 "is_configured": true, 00:20:49.329 "data_offset": 0, 00:20:49.329 "data_size": 65536 00:20:49.329 }, 00:20:49.329 { 00:20:49.329 "name": null, 00:20:49.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.329 "is_configured": false, 00:20:49.329 "data_offset": 0, 00:20:49.329 "data_size": 65536 00:20:49.329 }, 00:20:49.329 { 00:20:49.329 "name": "BaseBdev3", 00:20:49.329 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:49.329 "is_configured": true, 
00:20:49.329 "data_offset": 0, 00:20:49.329 "data_size": 65536 00:20:49.329 }, 00:20:49.329 { 00:20:49.329 "name": "BaseBdev4", 00:20:49.329 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:49.330 "is_configured": true, 00:20:49.330 "data_offset": 0, 00:20:49.330 "data_size": 65536 00:20:49.330 } 00:20:49.330 ] 00:20:49.330 }' 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.330 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.588 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.588 09:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.588 09:13:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.588 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.588 "name": "raid_bdev1", 00:20:49.588 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:49.588 "strip_size_kb": 0, 00:20:49.588 "state": "online", 00:20:49.588 "raid_level": "raid1", 00:20:49.588 "superblock": false, 00:20:49.588 "num_base_bdevs": 4, 00:20:49.588 "num_base_bdevs_discovered": 3, 00:20:49.588 "num_base_bdevs_operational": 3, 00:20:49.588 "process": { 00:20:49.588 "type": "rebuild", 00:20:49.588 "target": "spare", 00:20:49.588 "progress": { 00:20:49.588 "blocks": 26624, 00:20:49.588 "percent": 40 00:20:49.588 } 00:20:49.588 }, 00:20:49.588 "base_bdevs_list": [ 00:20:49.588 { 00:20:49.588 "name": "spare", 00:20:49.588 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:49.588 "is_configured": true, 00:20:49.588 "data_offset": 0, 00:20:49.588 "data_size": 65536 00:20:49.588 }, 00:20:49.588 { 00:20:49.588 "name": null, 00:20:49.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.588 "is_configured": false, 00:20:49.588 "data_offset": 0, 00:20:49.588 "data_size": 65536 00:20:49.588 }, 00:20:49.588 { 00:20:49.588 "name": "BaseBdev3", 00:20:49.588 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:49.588 "is_configured": true, 00:20:49.588 "data_offset": 0, 00:20:49.588 "data_size": 65536 00:20:49.588 }, 00:20:49.588 { 00:20:49.588 "name": "BaseBdev4", 00:20:49.588 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:49.588 "is_configured": true, 00:20:49.589 "data_offset": 0, 00:20:49.589 "data_size": 65536 00:20:49.589 } 00:20:49.589 ] 00:20:49.589 }' 00:20:49.589 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.589 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.589 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:20:49.589 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.589 09:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.523 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.523 "name": "raid_bdev1", 00:20:50.523 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:50.523 "strip_size_kb": 0, 00:20:50.523 "state": "online", 00:20:50.523 "raid_level": "raid1", 00:20:50.523 "superblock": false, 00:20:50.523 "num_base_bdevs": 4, 00:20:50.523 "num_base_bdevs_discovered": 3, 00:20:50.523 "num_base_bdevs_operational": 3, 00:20:50.523 "process": { 00:20:50.523 "type": "rebuild", 00:20:50.523 "target": "spare", 00:20:50.523 "progress": { 00:20:50.523 
"blocks": 49152, 00:20:50.523 "percent": 75 00:20:50.523 } 00:20:50.523 }, 00:20:50.523 "base_bdevs_list": [ 00:20:50.523 { 00:20:50.523 "name": "spare", 00:20:50.523 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:50.524 "is_configured": true, 00:20:50.524 "data_offset": 0, 00:20:50.524 "data_size": 65536 00:20:50.524 }, 00:20:50.524 { 00:20:50.524 "name": null, 00:20:50.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.524 "is_configured": false, 00:20:50.524 "data_offset": 0, 00:20:50.524 "data_size": 65536 00:20:50.524 }, 00:20:50.524 { 00:20:50.524 "name": "BaseBdev3", 00:20:50.524 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:50.524 "is_configured": true, 00:20:50.524 "data_offset": 0, 00:20:50.524 "data_size": 65536 00:20:50.524 }, 00:20:50.524 { 00:20:50.524 "name": "BaseBdev4", 00:20:50.524 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:50.524 "is_configured": true, 00:20:50.524 "data_offset": 0, 00:20:50.524 "data_size": 65536 00:20:50.524 } 00:20:50.524 ] 00:20:50.524 }' 00:20:50.524 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.781 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.781 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.781 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.781 09:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:51.348 [2024-11-06 09:13:50.227603] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:51.348 [2024-11-06 09:13:50.227691] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:51.348 [2024-11-06 09:13:50.227741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.917 "name": "raid_bdev1", 00:20:51.917 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:51.917 "strip_size_kb": 0, 00:20:51.917 "state": "online", 00:20:51.917 "raid_level": "raid1", 00:20:51.917 "superblock": false, 00:20:51.917 "num_base_bdevs": 4, 00:20:51.917 "num_base_bdevs_discovered": 3, 00:20:51.917 "num_base_bdevs_operational": 3, 00:20:51.917 "base_bdevs_list": [ 00:20:51.917 { 00:20:51.917 "name": "spare", 00:20:51.917 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": null, 00:20:51.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.917 "is_configured": false, 00:20:51.917 
"data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": "BaseBdev3", 00:20:51.917 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": "BaseBdev4", 00:20:51.917 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 } 00:20:51.917 ] 00:20:51.917 }' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.917 09:13:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.917 "name": "raid_bdev1", 00:20:51.917 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:51.917 "strip_size_kb": 0, 00:20:51.917 "state": "online", 00:20:51.917 "raid_level": "raid1", 00:20:51.917 "superblock": false, 00:20:51.917 "num_base_bdevs": 4, 00:20:51.917 "num_base_bdevs_discovered": 3, 00:20:51.917 "num_base_bdevs_operational": 3, 00:20:51.917 "base_bdevs_list": [ 00:20:51.917 { 00:20:51.917 "name": "spare", 00:20:51.917 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": null, 00:20:51.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.917 "is_configured": false, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": "BaseBdev3", 00:20:51.917 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 }, 00:20:51.917 { 00:20:51.917 "name": "BaseBdev4", 00:20:51.917 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:51.917 "is_configured": true, 00:20:51.917 "data_offset": 0, 00:20:51.917 "data_size": 65536 00:20:51.917 } 00:20:51.917 ] 00:20:51.917 }' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.917 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.177 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:20:52.177 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:52.177 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.177 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.177 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.178 09:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.178 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.178 "name": "raid_bdev1", 00:20:52.178 "uuid": "b6fe098c-9ebf-4c40-8ef9-d2f283b45a6e", 00:20:52.178 "strip_size_kb": 0, 00:20:52.178 "state": "online", 00:20:52.178 "raid_level": "raid1", 00:20:52.178 "superblock": false, 00:20:52.178 "num_base_bdevs": 4, 00:20:52.178 
"num_base_bdevs_discovered": 3, 00:20:52.178 "num_base_bdevs_operational": 3, 00:20:52.178 "base_bdevs_list": [ 00:20:52.178 { 00:20:52.178 "name": "spare", 00:20:52.178 "uuid": "6eea6146-f840-5b5c-9247-ade46a11ff30", 00:20:52.178 "is_configured": true, 00:20:52.178 "data_offset": 0, 00:20:52.178 "data_size": 65536 00:20:52.178 }, 00:20:52.178 { 00:20:52.178 "name": null, 00:20:52.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.178 "is_configured": false, 00:20:52.178 "data_offset": 0, 00:20:52.178 "data_size": 65536 00:20:52.178 }, 00:20:52.178 { 00:20:52.178 "name": "BaseBdev3", 00:20:52.178 "uuid": "3bbf7689-b5c4-53d8-a1d1-e066ceda6f5a", 00:20:52.178 "is_configured": true, 00:20:52.178 "data_offset": 0, 00:20:52.178 "data_size": 65536 00:20:52.178 }, 00:20:52.178 { 00:20:52.178 "name": "BaseBdev4", 00:20:52.178 "uuid": "8b5a980b-bc80-590e-9951-203074d9b146", 00:20:52.178 "is_configured": true, 00:20:52.178 "data_offset": 0, 00:20:52.178 "data_size": 65536 00:20:52.178 } 00:20:52.178 ] 00:20:52.178 }' 00:20:52.178 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.178 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 [2024-11-06 09:13:51.376833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.437 [2024-11-06 09:13:51.376988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.437 [2024-11-06 09:13:51.377210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.437 [2024-11-06 09:13:51.377416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:20:52.437 [2024-11-06 09:13:51.377597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.437 09:13:51 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.437 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:52.697 /dev/nbd0 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.697 1+0 records in 00:20:52.697 1+0 records out 00:20:52.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400486 s, 10.2 MB/s 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.697 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:52.957 /dev/nbd1 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.957 1+0 records in 00:20:52.957 1+0 records out 00:20:52.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381078 s, 10.7 MB/s 00:20:52.957 09:13:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.216 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:53.216 09:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.216 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.475 09:13:52 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.475 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77269 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77269 ']' 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77269 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77269 00:20:53.734 killing process with pid 77269 00:20:53.734 Received shutdown signal, test time was about 60.000000 seconds 00:20:53.734 00:20:53.734 Latency(us) 00:20:53.734 [2024-11-06T09:13:52.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.734 [2024-11-06T09:13:52.774Z] =================================================================================================================== 00:20:53.734 [2024-11-06T09:13:52.774Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.734 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:53.735 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:53.735 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77269' 00:20:53.735 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77269 00:20:53.735 09:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77269 00:20:53.735 [2024-11-06 09:13:52.729784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.302 [2024-11-06 09:13:53.236667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.680 ************************************ 00:20:55.680 END TEST raid_rebuild_test 00:20:55.680 ************************************ 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:55.680 00:20:55.680 real 0m18.424s 00:20:55.680 user 0m20.068s 00:20:55.680 sys 0m4.032s 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:55.680 09:13:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:55.680 09:13:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:55.680 09:13:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.680 09:13:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:55.680 ************************************ 00:20:55.680 START TEST raid_rebuild_test_sb 00:20:55.680 ************************************ 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77721 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 77721 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 77721 ']' 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.680 09:13:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.680 Zero copy mechanism will not be used. 00:20:55.680 [2024-11-06 09:13:54.560078] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:20:55.680 [2024-11-06 09:13:54.560215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77721 ] 00:20:56.130 [2024-11-06 09:13:54.744937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.130 [2024-11-06 09:13:54.880760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.130 [2024-11-06 09:13:55.094232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.130 [2024-11-06 09:13:55.094316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.388 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.388 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:56.388 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.388 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:56.389 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.389 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 BaseBdev1_malloc 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 [2024-11-06 09:13:55.455384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:56.647 [2024-11-06 09:13:55.455455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.647 [2024-11-06 09:13:55.455482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:56.647 [2024-11-06 09:13:55.455497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.647 [2024-11-06 09:13:55.457913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.647 [2024-11-06 09:13:55.458118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:56.647 BaseBdev1 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 BaseBdev2_malloc 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 [2024-11-06 09:13:55.509612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:56.647 [2024-11-06 09:13:55.509680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.647 [2024-11-06 09:13:55.509702] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:56.647 [2024-11-06 09:13:55.509718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.647 [2024-11-06 09:13:55.512117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.647 [2024-11-06 09:13:55.512161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:56.647 BaseBdev2 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 BaseBdev3_malloc 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 [2024-11-06 09:13:55.575390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:56.647 [2024-11-06 09:13:55.575444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.647 [2024-11-06 09:13:55.575467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:56.647 [2024-11-06 09:13:55.575481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:56.647 [2024-11-06 09:13:55.578074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.647 [2024-11-06 09:13:55.578116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:56.647 BaseBdev3 00:20:56.647 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.648 BaseBdev4_malloc 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.648 [2024-11-06 09:13:55.631500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:56.648 [2024-11-06 09:13:55.631556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.648 [2024-11-06 09:13:55.631577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:56.648 [2024-11-06 09:13:55.631591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.648 [2024-11-06 09:13:55.633905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.648 [2024-11-06 09:13:55.633947] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:56.648 BaseBdev4 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.648 spare_malloc 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.648 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.907 spare_delay 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.907 [2024-11-06 09:13:55.701482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:56.907 [2024-11-06 09:13:55.701542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.907 [2024-11-06 09:13:55.701564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:56.907 [2024-11-06 09:13:55.701579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:56.907 [2024-11-06 09:13:55.703913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.907 [2024-11-06 09:13:55.703953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:56.907 spare 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.907 [2024-11-06 09:13:55.713524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:56.907 [2024-11-06 09:13:55.715554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.907 [2024-11-06 09:13:55.715624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:56.907 [2024-11-06 09:13:55.715676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:56.907 [2024-11-06 09:13:55.715857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:56.907 [2024-11-06 09:13:55.715876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:56.907 [2024-11-06 09:13:55.716138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:56.907 [2024-11-06 09:13:55.716332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:56.907 [2024-11-06 09:13:55.716344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:56.907 [2024-11-06 09:13:55.716485] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.907 "name": "raid_bdev1", 00:20:56.907 "uuid": 
"40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:20:56.907 "strip_size_kb": 0, 00:20:56.907 "state": "online", 00:20:56.907 "raid_level": "raid1", 00:20:56.907 "superblock": true, 00:20:56.907 "num_base_bdevs": 4, 00:20:56.907 "num_base_bdevs_discovered": 4, 00:20:56.907 "num_base_bdevs_operational": 4, 00:20:56.907 "base_bdevs_list": [ 00:20:56.907 { 00:20:56.907 "name": "BaseBdev1", 00:20:56.907 "uuid": "50def192-e06f-5293-bd5b-8a08be8ef7d5", 00:20:56.907 "is_configured": true, 00:20:56.907 "data_offset": 2048, 00:20:56.907 "data_size": 63488 00:20:56.907 }, 00:20:56.907 { 00:20:56.907 "name": "BaseBdev2", 00:20:56.907 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:20:56.907 "is_configured": true, 00:20:56.907 "data_offset": 2048, 00:20:56.907 "data_size": 63488 00:20:56.907 }, 00:20:56.907 { 00:20:56.907 "name": "BaseBdev3", 00:20:56.907 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:20:56.907 "is_configured": true, 00:20:56.907 "data_offset": 2048, 00:20:56.907 "data_size": 63488 00:20:56.907 }, 00:20:56.907 { 00:20:56.907 "name": "BaseBdev4", 00:20:56.907 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:20:56.907 "is_configured": true, 00:20:56.907 "data_offset": 2048, 00:20:56.907 "data_size": 63488 00:20:56.907 } 00:20:56.907 ] 00:20:56.907 }' 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.907 09:13:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:57.165 [2024-11-06 09:13:56.153316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.165 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.423 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:57.424 09:13:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.424 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:57.424 [2024-11-06 09:13:56.440580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:57.424 /dev/nbd0 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.682 1+0 records in 00:20:57.682 1+0 records out 00:20:57.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459973 s, 8.9 MB/s 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:57.682 09:13:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:02.947 63488+0 records in 00:21:02.947 63488+0 records out 00:21:02.947 32505856 bytes (33 MB, 31 MiB) copied, 5.36 s, 6.1 MB/s 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.947 09:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.205 [2024-11-06 09:14:02.092667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.205 [2024-11-06 09:14:02.113849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.205 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.205 "name": "raid_bdev1", 00:21:03.205 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:03.205 "strip_size_kb": 0, 00:21:03.205 "state": "online", 00:21:03.205 "raid_level": "raid1", 00:21:03.205 "superblock": true, 00:21:03.205 "num_base_bdevs": 4, 00:21:03.205 "num_base_bdevs_discovered": 3, 00:21:03.205 "num_base_bdevs_operational": 3, 00:21:03.205 "base_bdevs_list": [ 00:21:03.205 { 00:21:03.205 "name": null, 00:21:03.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.205 "is_configured": false, 00:21:03.205 "data_offset": 0, 00:21:03.205 "data_size": 63488 00:21:03.205 }, 00:21:03.205 { 00:21:03.205 "name": "BaseBdev2", 00:21:03.205 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:21:03.205 "is_configured": true, 00:21:03.205 
"data_offset": 2048, 00:21:03.205 "data_size": 63488 00:21:03.205 }, 00:21:03.205 { 00:21:03.205 "name": "BaseBdev3", 00:21:03.205 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:03.205 "is_configured": true, 00:21:03.206 "data_offset": 2048, 00:21:03.206 "data_size": 63488 00:21:03.206 }, 00:21:03.206 { 00:21:03.206 "name": "BaseBdev4", 00:21:03.206 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:03.206 "is_configured": true, 00:21:03.206 "data_offset": 2048, 00:21:03.206 "data_size": 63488 00:21:03.206 } 00:21:03.206 ] 00:21:03.206 }' 00:21:03.206 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.206 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.773 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:03.773 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.773 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.773 [2024-11-06 09:14:02.529290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.773 [2024-11-06 09:14:02.544932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:21:03.773 09:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.773 09:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:03.773 [2024-11-06 09:14:02.547058] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.709 "name": "raid_bdev1", 00:21:04.709 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:04.709 "strip_size_kb": 0, 00:21:04.709 "state": "online", 00:21:04.709 "raid_level": "raid1", 00:21:04.709 "superblock": true, 00:21:04.709 "num_base_bdevs": 4, 00:21:04.709 "num_base_bdevs_discovered": 4, 00:21:04.709 "num_base_bdevs_operational": 4, 00:21:04.709 "process": { 00:21:04.709 "type": "rebuild", 00:21:04.709 "target": "spare", 00:21:04.709 "progress": { 00:21:04.709 "blocks": 20480, 00:21:04.709 "percent": 32 00:21:04.709 } 00:21:04.709 }, 00:21:04.709 "base_bdevs_list": [ 00:21:04.709 { 00:21:04.709 "name": "spare", 00:21:04.709 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:04.709 "is_configured": true, 00:21:04.709 "data_offset": 2048, 00:21:04.709 "data_size": 63488 00:21:04.709 }, 00:21:04.709 { 00:21:04.709 "name": "BaseBdev2", 00:21:04.709 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:21:04.709 "is_configured": true, 00:21:04.709 "data_offset": 2048, 00:21:04.709 "data_size": 63488 00:21:04.709 }, 00:21:04.709 { 00:21:04.709 "name": "BaseBdev3", 00:21:04.709 "uuid": 
"583036f2-a119-534c-afc6-1de80b1440fd", 00:21:04.709 "is_configured": true, 00:21:04.709 "data_offset": 2048, 00:21:04.709 "data_size": 63488 00:21:04.709 }, 00:21:04.709 { 00:21:04.709 "name": "BaseBdev4", 00:21:04.709 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:04.709 "is_configured": true, 00:21:04.709 "data_offset": 2048, 00:21:04.709 "data_size": 63488 00:21:04.709 } 00:21:04.709 ] 00:21:04.709 }' 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.709 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.709 [2024-11-06 09:14:03.686498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.969 [2024-11-06 09:14:03.752749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:04.969 [2024-11-06 09:14:03.752848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.969 [2024-11-06 09:14:03.752866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.969 [2024-11-06 09:14:03.752878] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.969 "name": "raid_bdev1", 00:21:04.969 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:04.969 "strip_size_kb": 0, 00:21:04.969 "state": "online", 00:21:04.969 "raid_level": "raid1", 00:21:04.969 "superblock": true, 00:21:04.969 "num_base_bdevs": 4, 00:21:04.969 
"num_base_bdevs_discovered": 3, 00:21:04.969 "num_base_bdevs_operational": 3, 00:21:04.969 "base_bdevs_list": [ 00:21:04.969 { 00:21:04.969 "name": null, 00:21:04.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.969 "is_configured": false, 00:21:04.969 "data_offset": 0, 00:21:04.969 "data_size": 63488 00:21:04.969 }, 00:21:04.969 { 00:21:04.969 "name": "BaseBdev2", 00:21:04.969 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:21:04.969 "is_configured": true, 00:21:04.969 "data_offset": 2048, 00:21:04.969 "data_size": 63488 00:21:04.969 }, 00:21:04.969 { 00:21:04.969 "name": "BaseBdev3", 00:21:04.969 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:04.969 "is_configured": true, 00:21:04.969 "data_offset": 2048, 00:21:04.969 "data_size": 63488 00:21:04.969 }, 00:21:04.969 { 00:21:04.969 "name": "BaseBdev4", 00:21:04.969 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:04.969 "is_configured": true, 00:21:04.969 "data_offset": 2048, 00:21:04.969 "data_size": 63488 00:21:04.969 } 00:21:04.969 ] 00:21:04.969 }' 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.969 09:14:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.228 "name": "raid_bdev1", 00:21:05.228 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:05.228 "strip_size_kb": 0, 00:21:05.228 "state": "online", 00:21:05.228 "raid_level": "raid1", 00:21:05.228 "superblock": true, 00:21:05.228 "num_base_bdevs": 4, 00:21:05.228 "num_base_bdevs_discovered": 3, 00:21:05.228 "num_base_bdevs_operational": 3, 00:21:05.228 "base_bdevs_list": [ 00:21:05.228 { 00:21:05.228 "name": null, 00:21:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.228 "is_configured": false, 00:21:05.228 "data_offset": 0, 00:21:05.228 "data_size": 63488 00:21:05.228 }, 00:21:05.228 { 00:21:05.228 "name": "BaseBdev2", 00:21:05.228 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:21:05.228 "is_configured": true, 00:21:05.228 "data_offset": 2048, 00:21:05.228 "data_size": 63488 00:21:05.228 }, 00:21:05.228 { 00:21:05.228 "name": "BaseBdev3", 00:21:05.228 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:05.228 "is_configured": true, 00:21:05.228 "data_offset": 2048, 00:21:05.228 "data_size": 63488 00:21:05.228 }, 00:21:05.228 { 00:21:05.228 "name": "BaseBdev4", 00:21:05.228 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:05.228 "is_configured": true, 00:21:05.228 "data_offset": 2048, 00:21:05.228 "data_size": 63488 00:21:05.228 } 00:21:05.228 ] 00:21:05.228 }' 00:21:05.228 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.486 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:21:05.486 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.487 [2024-11-06 09:14:04.342291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.487 [2024-11-06 09:14:04.356064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.487 09:14:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:05.487 [2024-11-06 09:14:04.358201] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.423 "name": "raid_bdev1", 00:21:06.423 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:06.423 "strip_size_kb": 0, 00:21:06.423 "state": "online", 00:21:06.423 "raid_level": "raid1", 00:21:06.423 "superblock": true, 00:21:06.423 "num_base_bdevs": 4, 00:21:06.423 "num_base_bdevs_discovered": 4, 00:21:06.423 "num_base_bdevs_operational": 4, 00:21:06.423 "process": { 00:21:06.423 "type": "rebuild", 00:21:06.423 "target": "spare", 00:21:06.423 "progress": { 00:21:06.423 "blocks": 20480, 00:21:06.423 "percent": 32 00:21:06.423 } 00:21:06.423 }, 00:21:06.423 "base_bdevs_list": [ 00:21:06.423 { 00:21:06.423 "name": "spare", 00:21:06.423 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:06.423 "is_configured": true, 00:21:06.423 "data_offset": 2048, 00:21:06.423 "data_size": 63488 00:21:06.423 }, 00:21:06.423 { 00:21:06.423 "name": "BaseBdev2", 00:21:06.423 "uuid": "588b9241-6bab-5565-86bd-ad3ebcf1fbba", 00:21:06.423 "is_configured": true, 00:21:06.423 "data_offset": 2048, 00:21:06.423 "data_size": 63488 00:21:06.423 }, 00:21:06.423 { 00:21:06.423 "name": "BaseBdev3", 00:21:06.423 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:06.423 "is_configured": true, 00:21:06.423 "data_offset": 2048, 00:21:06.423 "data_size": 63488 00:21:06.423 }, 00:21:06.423 { 00:21:06.423 "name": "BaseBdev4", 00:21:06.423 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:06.423 "is_configured": true, 00:21:06.423 "data_offset": 2048, 00:21:06.423 "data_size": 63488 00:21:06.423 } 00:21:06.423 ] 00:21:06.423 }' 00:21:06.423 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:06.682 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.682 [2024-11-06 09:14:05.502417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:06.682 [2024-11-06 09:14:05.663856] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.682 "name": "raid_bdev1", 00:21:06.682 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:06.682 "strip_size_kb": 0, 00:21:06.682 "state": "online", 00:21:06.682 "raid_level": "raid1", 00:21:06.682 "superblock": true, 00:21:06.682 "num_base_bdevs": 4, 00:21:06.682 "num_base_bdevs_discovered": 3, 00:21:06.682 "num_base_bdevs_operational": 3, 00:21:06.682 "process": { 00:21:06.682 "type": "rebuild", 00:21:06.682 "target": "spare", 00:21:06.682 "progress": { 00:21:06.682 "blocks": 24576, 00:21:06.682 "percent": 38 00:21:06.682 } 00:21:06.682 }, 00:21:06.682 "base_bdevs_list": [ 00:21:06.682 { 00:21:06.682 "name": "spare", 00:21:06.682 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:06.682 "is_configured": true, 00:21:06.682 "data_offset": 2048, 00:21:06.682 "data_size": 63488 00:21:06.682 }, 00:21:06.682 { 00:21:06.682 "name": null, 00:21:06.682 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:06.682 "is_configured": false, 00:21:06.682 "data_offset": 0, 00:21:06.682 "data_size": 63488 00:21:06.682 }, 00:21:06.682 { 00:21:06.682 "name": "BaseBdev3", 00:21:06.682 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:06.682 "is_configured": true, 00:21:06.682 "data_offset": 2048, 00:21:06.682 "data_size": 63488 00:21:06.682 }, 00:21:06.682 { 00:21:06.682 "name": "BaseBdev4", 00:21:06.682 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:06.682 "is_configured": true, 00:21:06.682 "data_offset": 2048, 00:21:06.682 "data_size": 63488 00:21:06.682 } 00:21:06.682 ] 00:21:06.682 }' 00:21:06.682 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.941 
09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.941 "name": "raid_bdev1", 00:21:06.941 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:06.941 "strip_size_kb": 0, 00:21:06.941 "state": "online", 00:21:06.941 "raid_level": "raid1", 00:21:06.941 "superblock": true, 00:21:06.941 "num_base_bdevs": 4, 00:21:06.941 "num_base_bdevs_discovered": 3, 00:21:06.941 "num_base_bdevs_operational": 3, 00:21:06.941 "process": { 00:21:06.941 "type": "rebuild", 00:21:06.941 "target": "spare", 00:21:06.941 "progress": { 00:21:06.941 "blocks": 26624, 00:21:06.941 "percent": 41 00:21:06.941 } 00:21:06.941 }, 00:21:06.941 "base_bdevs_list": [ 00:21:06.941 { 00:21:06.941 "name": "spare", 00:21:06.941 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:06.941 "is_configured": true, 00:21:06.941 "data_offset": 2048, 00:21:06.941 "data_size": 63488 00:21:06.941 }, 00:21:06.941 { 00:21:06.941 "name": null, 00:21:06.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.941 "is_configured": false, 00:21:06.941 "data_offset": 0, 00:21:06.941 "data_size": 63488 00:21:06.941 }, 00:21:06.941 { 00:21:06.941 "name": "BaseBdev3", 00:21:06.941 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:06.941 "is_configured": true, 00:21:06.941 "data_offset": 2048, 00:21:06.941 "data_size": 63488 00:21:06.941 }, 00:21:06.941 { 00:21:06.941 "name": "BaseBdev4", 00:21:06.941 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:06.941 "is_configured": true, 00:21:06.941 "data_offset": 2048, 00:21:06.941 "data_size": 63488 
00:21:06.941 } 00:21:06.941 ] 00:21:06.941 }' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.941 09:14:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.318 09:14:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.318 "name": "raid_bdev1", 00:21:08.318 "uuid": 
"40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:08.318 "strip_size_kb": 0, 00:21:08.318 "state": "online", 00:21:08.318 "raid_level": "raid1", 00:21:08.318 "superblock": true, 00:21:08.318 "num_base_bdevs": 4, 00:21:08.318 "num_base_bdevs_discovered": 3, 00:21:08.318 "num_base_bdevs_operational": 3, 00:21:08.318 "process": { 00:21:08.318 "type": "rebuild", 00:21:08.318 "target": "spare", 00:21:08.318 "progress": { 00:21:08.318 "blocks": 51200, 00:21:08.318 "percent": 80 00:21:08.318 } 00:21:08.318 }, 00:21:08.318 "base_bdevs_list": [ 00:21:08.318 { 00:21:08.318 "name": "spare", 00:21:08.318 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:08.318 "is_configured": true, 00:21:08.318 "data_offset": 2048, 00:21:08.318 "data_size": 63488 00:21:08.318 }, 00:21:08.318 { 00:21:08.318 "name": null, 00:21:08.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.318 "is_configured": false, 00:21:08.318 "data_offset": 0, 00:21:08.318 "data_size": 63488 00:21:08.318 }, 00:21:08.318 { 00:21:08.318 "name": "BaseBdev3", 00:21:08.318 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:08.318 "is_configured": true, 00:21:08.318 "data_offset": 2048, 00:21:08.318 "data_size": 63488 00:21:08.318 }, 00:21:08.318 { 00:21:08.318 "name": "BaseBdev4", 00:21:08.318 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:08.318 "is_configured": true, 00:21:08.318 "data_offset": 2048, 00:21:08.318 "data_size": 63488 00:21:08.318 } 00:21:08.318 ] 00:21:08.318 }' 00:21:08.318 09:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.318 09:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.318 09:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.318 09:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.318 09:14:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:08.577 [2024-11-06 09:14:07.572764] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:08.577 [2024-11-06 09:14:07.572855] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:08.577 [2024-11-06 09:14:07.572984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.170 "name": "raid_bdev1", 00:21:09.170 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:09.170 "strip_size_kb": 0, 00:21:09.170 "state": "online", 00:21:09.170 "raid_level": "raid1", 00:21:09.170 "superblock": true, 00:21:09.170 "num_base_bdevs": 
4, 00:21:09.170 "num_base_bdevs_discovered": 3, 00:21:09.170 "num_base_bdevs_operational": 3, 00:21:09.170 "base_bdevs_list": [ 00:21:09.170 { 00:21:09.170 "name": "spare", 00:21:09.170 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:09.170 "is_configured": true, 00:21:09.170 "data_offset": 2048, 00:21:09.170 "data_size": 63488 00:21:09.170 }, 00:21:09.170 { 00:21:09.170 "name": null, 00:21:09.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.170 "is_configured": false, 00:21:09.170 "data_offset": 0, 00:21:09.170 "data_size": 63488 00:21:09.170 }, 00:21:09.170 { 00:21:09.170 "name": "BaseBdev3", 00:21:09.170 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:09.170 "is_configured": true, 00:21:09.170 "data_offset": 2048, 00:21:09.170 "data_size": 63488 00:21:09.170 }, 00:21:09.170 { 00:21:09.170 "name": "BaseBdev4", 00:21:09.170 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:09.170 "is_configured": true, 00:21:09.170 "data_offset": 2048, 00:21:09.170 "data_size": 63488 00:21:09.170 } 00:21:09.170 ] 00:21:09.170 }' 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:09.170 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.429 09:14:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.429 "name": "raid_bdev1", 00:21:09.429 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:09.429 "strip_size_kb": 0, 00:21:09.429 "state": "online", 00:21:09.429 "raid_level": "raid1", 00:21:09.429 "superblock": true, 00:21:09.429 "num_base_bdevs": 4, 00:21:09.429 "num_base_bdevs_discovered": 3, 00:21:09.429 "num_base_bdevs_operational": 3, 00:21:09.429 "base_bdevs_list": [ 00:21:09.429 { 00:21:09.429 "name": "spare", 00:21:09.429 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": null, 00:21:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.429 "is_configured": false, 00:21:09.429 "data_offset": 0, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": "BaseBdev3", 00:21:09.429 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": "BaseBdev4", 00:21:09.429 "uuid": 
"2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 } 00:21:09.429 ] 00:21:09.429 }' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.429 09:14:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.429 "name": "raid_bdev1", 00:21:09.429 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:09.429 "strip_size_kb": 0, 00:21:09.429 "state": "online", 00:21:09.429 "raid_level": "raid1", 00:21:09.429 "superblock": true, 00:21:09.429 "num_base_bdevs": 4, 00:21:09.429 "num_base_bdevs_discovered": 3, 00:21:09.429 "num_base_bdevs_operational": 3, 00:21:09.429 "base_bdevs_list": [ 00:21:09.429 { 00:21:09.429 "name": "spare", 00:21:09.429 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": null, 00:21:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.429 "is_configured": false, 00:21:09.429 "data_offset": 0, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": "BaseBdev3", 00:21:09.429 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 }, 00:21:09.429 { 00:21:09.429 "name": "BaseBdev4", 00:21:09.429 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:09.429 "is_configured": true, 00:21:09.429 "data_offset": 2048, 00:21:09.429 "data_size": 63488 00:21:09.429 } 00:21:09.429 ] 00:21:09.429 }' 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.429 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.996 [2024-11-06 09:14:08.818185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:09.996 [2024-11-06 09:14:08.818226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:09.996 [2024-11-06 09:14:08.818328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.996 [2024-11-06 09:14:08.818408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.996 [2024-11-06 09:14:08.818420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:09.996 
09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:09.996 09:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:10.256 /dev/nbd0 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:10.256 09:14:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:10.256 1+0 records in 00:21:10.256 1+0 records out 00:21:10.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408173 s, 10.0 MB/s 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:10.256 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:10.514 /dev/nbd1 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i <= 20 )) 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:10.514 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:10.515 1+0 records in 00:21:10.515 1+0 records out 00:21:10.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345956 s, 11.8 MB/s 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:10.515 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:10.773 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:11.030 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:11.031 09:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.031 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.031 [2024-11-06 09:14:10.068438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:11.031 [2024-11-06 09:14:10.068505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.031 [2024-11-06 09:14:10.068531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:11.031 [2024-11-06 09:14:10.068543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.290 [2024-11-06 09:14:10.071220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.290 [2024-11-06 09:14:10.071265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:21:11.290 [2024-11-06 09:14:10.071397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:11.290 [2024-11-06 09:14:10.071453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.290 [2024-11-06 09:14:10.071604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:11.290 [2024-11-06 09:14:10.071701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:11.290 spare 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.290 [2024-11-06 09:14:10.171641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:11.290 [2024-11-06 09:14:10.171684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:11.290 [2024-11-06 09:14:10.172059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:11.290 [2024-11-06 09:14:10.172271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:11.290 [2024-11-06 09:14:10.172306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:11.290 [2024-11-06 09:14:10.172510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:11.290 09:14:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.290 "name": "raid_bdev1", 00:21:11.290 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:11.290 "strip_size_kb": 0, 00:21:11.290 "state": "online", 00:21:11.290 "raid_level": "raid1", 00:21:11.290 "superblock": true, 00:21:11.290 "num_base_bdevs": 4, 00:21:11.290 "num_base_bdevs_discovered": 3, 00:21:11.290 "num_base_bdevs_operational": 3, 00:21:11.290 "base_bdevs_list": [ 00:21:11.290 { 
00:21:11.290 "name": "spare", 00:21:11.290 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:11.290 "is_configured": true, 00:21:11.290 "data_offset": 2048, 00:21:11.290 "data_size": 63488 00:21:11.290 }, 00:21:11.290 { 00:21:11.290 "name": null, 00:21:11.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.290 "is_configured": false, 00:21:11.290 "data_offset": 2048, 00:21:11.290 "data_size": 63488 00:21:11.290 }, 00:21:11.290 { 00:21:11.290 "name": "BaseBdev3", 00:21:11.290 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:11.290 "is_configured": true, 00:21:11.290 "data_offset": 2048, 00:21:11.290 "data_size": 63488 00:21:11.290 }, 00:21:11.290 { 00:21:11.290 "name": "BaseBdev4", 00:21:11.290 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:11.290 "is_configured": true, 00:21:11.290 "data_offset": 2048, 00:21:11.290 "data_size": 63488 00:21:11.290 } 00:21:11.290 ] 00:21:11.290 }' 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.290 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:11.549 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.808 "name": "raid_bdev1", 00:21:11.808 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:11.808 "strip_size_kb": 0, 00:21:11.808 "state": "online", 00:21:11.808 "raid_level": "raid1", 00:21:11.808 "superblock": true, 00:21:11.808 "num_base_bdevs": 4, 00:21:11.808 "num_base_bdevs_discovered": 3, 00:21:11.808 "num_base_bdevs_operational": 3, 00:21:11.808 "base_bdevs_list": [ 00:21:11.808 { 00:21:11.808 "name": "spare", 00:21:11.808 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:11.808 "is_configured": true, 00:21:11.808 "data_offset": 2048, 00:21:11.808 "data_size": 63488 00:21:11.808 }, 00:21:11.808 { 00:21:11.808 "name": null, 00:21:11.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.808 "is_configured": false, 00:21:11.808 "data_offset": 2048, 00:21:11.808 "data_size": 63488 00:21:11.808 }, 00:21:11.808 { 00:21:11.808 "name": "BaseBdev3", 00:21:11.808 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:11.808 "is_configured": true, 00:21:11.808 "data_offset": 2048, 00:21:11.808 "data_size": 63488 00:21:11.808 }, 00:21:11.808 { 00:21:11.808 "name": "BaseBdev4", 00:21:11.808 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:11.808 "is_configured": true, 00:21:11.808 "data_offset": 2048, 00:21:11.808 "data_size": 63488 00:21:11.808 } 00:21:11.808 ] 00:21:11.808 }' 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.808 09:14:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.808 [2024-11-06 09:14:10.751718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.808 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.809 09:14:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.809 "name": "raid_bdev1", 00:21:11.809 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:11.809 "strip_size_kb": 0, 00:21:11.809 "state": "online", 00:21:11.809 "raid_level": "raid1", 00:21:11.809 "superblock": true, 00:21:11.809 "num_base_bdevs": 4, 00:21:11.809 "num_base_bdevs_discovered": 2, 00:21:11.809 "num_base_bdevs_operational": 2, 00:21:11.809 "base_bdevs_list": [ 00:21:11.809 { 00:21:11.809 "name": null, 00:21:11.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.809 "is_configured": false, 00:21:11.809 "data_offset": 0, 00:21:11.809 "data_size": 63488 00:21:11.809 }, 00:21:11.809 { 00:21:11.809 "name": null, 00:21:11.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.809 "is_configured": false, 00:21:11.809 "data_offset": 2048, 00:21:11.809 "data_size": 63488 00:21:11.809 }, 00:21:11.809 { 00:21:11.809 "name": "BaseBdev3", 00:21:11.809 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:11.809 
"is_configured": true, 00:21:11.809 "data_offset": 2048, 00:21:11.809 "data_size": 63488 00:21:11.809 }, 00:21:11.809 { 00:21:11.809 "name": "BaseBdev4", 00:21:11.809 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:11.809 "is_configured": true, 00:21:11.809 "data_offset": 2048, 00:21:11.809 "data_size": 63488 00:21:11.809 } 00:21:11.809 ] 00:21:11.809 }' 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.809 09:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.376 09:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.376 09:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.376 09:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.376 [2024-11-06 09:14:11.235074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.376 [2024-11-06 09:14:11.235287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:12.376 [2024-11-06 09:14:11.235308] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:12.376 [2024-11-06 09:14:11.235348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.376 [2024-11-06 09:14:11.249081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:21:12.376 09:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.376 09:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:12.376 [2024-11-06 09:14:11.251254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.311 "name": "raid_bdev1", 00:21:13.311 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:13.311 "strip_size_kb": 0, 00:21:13.311 "state": "online", 00:21:13.311 "raid_level": "raid1", 
00:21:13.311 "superblock": true, 00:21:13.311 "num_base_bdevs": 4, 00:21:13.311 "num_base_bdevs_discovered": 3, 00:21:13.311 "num_base_bdevs_operational": 3, 00:21:13.311 "process": { 00:21:13.311 "type": "rebuild", 00:21:13.311 "target": "spare", 00:21:13.311 "progress": { 00:21:13.311 "blocks": 20480, 00:21:13.311 "percent": 32 00:21:13.311 } 00:21:13.311 }, 00:21:13.311 "base_bdevs_list": [ 00:21:13.311 { 00:21:13.311 "name": "spare", 00:21:13.311 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:13.311 "is_configured": true, 00:21:13.311 "data_offset": 2048, 00:21:13.311 "data_size": 63488 00:21:13.311 }, 00:21:13.311 { 00:21:13.311 "name": null, 00:21:13.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.311 "is_configured": false, 00:21:13.311 "data_offset": 2048, 00:21:13.311 "data_size": 63488 00:21:13.311 }, 00:21:13.311 { 00:21:13.311 "name": "BaseBdev3", 00:21:13.311 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:13.311 "is_configured": true, 00:21:13.311 "data_offset": 2048, 00:21:13.311 "data_size": 63488 00:21:13.311 }, 00:21:13.311 { 00:21:13.311 "name": "BaseBdev4", 00:21:13.311 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:13.311 "is_configured": true, 00:21:13.311 "data_offset": 2048, 00:21:13.311 "data_size": 63488 00:21:13.311 } 00:21:13.311 ] 00:21:13.311 }' 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.311 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 [2024-11-06 09:14:12.399315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.570 [2024-11-06 09:14:12.456765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:13.570 [2024-11-06 09:14:12.456826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.570 [2024-11-06 09:14:12.456846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.570 [2024-11-06 09:14:12.456855] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.570 "name": "raid_bdev1", 00:21:13.570 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:13.570 "strip_size_kb": 0, 00:21:13.570 "state": "online", 00:21:13.570 "raid_level": "raid1", 00:21:13.570 "superblock": true, 00:21:13.570 "num_base_bdevs": 4, 00:21:13.570 "num_base_bdevs_discovered": 2, 00:21:13.570 "num_base_bdevs_operational": 2, 00:21:13.570 "base_bdevs_list": [ 00:21:13.570 { 00:21:13.570 "name": null, 00:21:13.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.570 "is_configured": false, 00:21:13.570 "data_offset": 0, 00:21:13.570 "data_size": 63488 00:21:13.570 }, 00:21:13.570 { 00:21:13.570 "name": null, 00:21:13.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.570 "is_configured": false, 00:21:13.570 "data_offset": 2048, 00:21:13.570 "data_size": 63488 00:21:13.570 }, 00:21:13.570 { 00:21:13.570 "name": "BaseBdev3", 00:21:13.570 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:13.570 "is_configured": true, 00:21:13.570 "data_offset": 2048, 00:21:13.570 "data_size": 63488 00:21:13.570 }, 00:21:13.570 { 00:21:13.570 "name": "BaseBdev4", 00:21:13.570 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:13.570 "is_configured": true, 00:21:13.570 "data_offset": 2048, 00:21:13.570 "data_size": 63488 00:21:13.570 } 00:21:13.570 ] 00:21:13.570 }' 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:13.570 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.870 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:13.870 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.870 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.870 [2024-11-06 09:14:12.898390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.870 [2024-11-06 09:14:12.898458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.870 [2024-11-06 09:14:12.898491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:13.870 [2024-11-06 09:14:12.898503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.871 [2024-11-06 09:14:12.898990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.871 [2024-11-06 09:14:12.899009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.871 [2024-11-06 09:14:12.899108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:13.871 [2024-11-06 09:14:12.899122] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:13.871 [2024-11-06 09:14:12.899138] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:13.871 [2024-11-06 09:14:12.899171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.129 [2024-11-06 09:14:12.912962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:21:14.129 spare 00:21:14.129 09:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.129 09:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:14.129 [2024-11-06 09:14:12.915091] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.065 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.065 "name": "raid_bdev1", 00:21:15.065 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:15.065 "strip_size_kb": 0, 00:21:15.065 "state": "online", 00:21:15.065 
"raid_level": "raid1", 00:21:15.065 "superblock": true, 00:21:15.065 "num_base_bdevs": 4, 00:21:15.065 "num_base_bdevs_discovered": 3, 00:21:15.065 "num_base_bdevs_operational": 3, 00:21:15.065 "process": { 00:21:15.065 "type": "rebuild", 00:21:15.065 "target": "spare", 00:21:15.065 "progress": { 00:21:15.065 "blocks": 20480, 00:21:15.065 "percent": 32 00:21:15.065 } 00:21:15.065 }, 00:21:15.065 "base_bdevs_list": [ 00:21:15.065 { 00:21:15.065 "name": "spare", 00:21:15.065 "uuid": "e118128a-1025-522a-b2c5-1fb14e9c352a", 00:21:15.065 "is_configured": true, 00:21:15.065 "data_offset": 2048, 00:21:15.065 "data_size": 63488 00:21:15.065 }, 00:21:15.065 { 00:21:15.065 "name": null, 00:21:15.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.065 "is_configured": false, 00:21:15.065 "data_offset": 2048, 00:21:15.065 "data_size": 63488 00:21:15.065 }, 00:21:15.065 { 00:21:15.066 "name": "BaseBdev3", 00:21:15.066 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:15.066 "is_configured": true, 00:21:15.066 "data_offset": 2048, 00:21:15.066 "data_size": 63488 00:21:15.066 }, 00:21:15.066 { 00:21:15.066 "name": "BaseBdev4", 00:21:15.066 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:15.066 "is_configured": true, 00:21:15.066 "data_offset": 2048, 00:21:15.066 "data_size": 63488 00:21:15.066 } 00:21:15.066 ] 00:21:15.066 }' 00:21:15.066 09:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.066 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.066 [2024-11-06 09:14:14.050923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.324 [2024-11-06 09:14:14.120394] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:15.324 [2024-11-06 09:14:14.120593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.324 [2024-11-06 09:14:14.120616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.324 [2024-11-06 09:14:14.120630] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.324 
09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.324 "name": "raid_bdev1", 00:21:15.324 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:15.324 "strip_size_kb": 0, 00:21:15.324 "state": "online", 00:21:15.324 "raid_level": "raid1", 00:21:15.324 "superblock": true, 00:21:15.324 "num_base_bdevs": 4, 00:21:15.324 "num_base_bdevs_discovered": 2, 00:21:15.324 "num_base_bdevs_operational": 2, 00:21:15.324 "base_bdevs_list": [ 00:21:15.324 { 00:21:15.324 "name": null, 00:21:15.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.324 "is_configured": false, 00:21:15.324 "data_offset": 0, 00:21:15.324 "data_size": 63488 00:21:15.324 }, 00:21:15.324 { 00:21:15.324 "name": null, 00:21:15.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.324 "is_configured": false, 00:21:15.324 "data_offset": 2048, 00:21:15.324 "data_size": 63488 00:21:15.324 }, 00:21:15.324 { 00:21:15.324 "name": "BaseBdev3", 00:21:15.324 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:15.324 "is_configured": true, 00:21:15.324 "data_offset": 2048, 00:21:15.324 "data_size": 63488 00:21:15.324 }, 00:21:15.324 { 00:21:15.324 "name": "BaseBdev4", 00:21:15.324 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:15.324 "is_configured": true, 00:21:15.324 "data_offset": 2048, 00:21:15.324 "data_size": 63488 00:21:15.324 } 00:21:15.324 ] 00:21:15.324 }' 00:21:15.324 09:14:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.324 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.583 "name": "raid_bdev1", 00:21:15.583 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:15.583 "strip_size_kb": 0, 00:21:15.583 "state": "online", 00:21:15.583 "raid_level": "raid1", 00:21:15.583 "superblock": true, 00:21:15.583 "num_base_bdevs": 4, 00:21:15.583 "num_base_bdevs_discovered": 2, 00:21:15.583 "num_base_bdevs_operational": 2, 00:21:15.583 "base_bdevs_list": [ 00:21:15.583 { 00:21:15.583 "name": null, 00:21:15.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.583 "is_configured": false, 00:21:15.583 "data_offset": 0, 00:21:15.583 "data_size": 63488 00:21:15.583 }, 00:21:15.583 
{ 00:21:15.583 "name": null, 00:21:15.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.583 "is_configured": false, 00:21:15.583 "data_offset": 2048, 00:21:15.583 "data_size": 63488 00:21:15.583 }, 00:21:15.583 { 00:21:15.583 "name": "BaseBdev3", 00:21:15.583 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:15.583 "is_configured": true, 00:21:15.583 "data_offset": 2048, 00:21:15.583 "data_size": 63488 00:21:15.583 }, 00:21:15.583 { 00:21:15.583 "name": "BaseBdev4", 00:21:15.583 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:15.583 "is_configured": true, 00:21:15.583 "data_offset": 2048, 00:21:15.583 "data_size": 63488 00:21:15.583 } 00:21:15.583 ] 00:21:15.583 }' 00:21:15.583 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.842 [2024-11-06 09:14:14.698326] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:15.842 [2024-11-06 09:14:14.698392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.842 [2024-11-06 09:14:14.698416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:15.842 [2024-11-06 09:14:14.698431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.842 [2024-11-06 09:14:14.698909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.842 [2024-11-06 09:14:14.698941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:15.842 [2024-11-06 09:14:14.699024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:15.842 [2024-11-06 09:14:14.699046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:15.842 [2024-11-06 09:14:14.699056] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:15.842 [2024-11-06 09:14:14.699080] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:15.842 BaseBdev1 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.842 09:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.782 09:14:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.782 "name": "raid_bdev1", 00:21:16.782 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:16.782 "strip_size_kb": 0, 00:21:16.782 "state": "online", 00:21:16.782 "raid_level": "raid1", 00:21:16.782 "superblock": true, 00:21:16.782 "num_base_bdevs": 4, 00:21:16.782 "num_base_bdevs_discovered": 2, 00:21:16.782 "num_base_bdevs_operational": 2, 00:21:16.782 "base_bdevs_list": [ 00:21:16.782 { 00:21:16.782 "name": null, 00:21:16.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.782 "is_configured": false, 00:21:16.782 "data_offset": 0, 00:21:16.782 "data_size": 63488 00:21:16.782 }, 00:21:16.782 { 00:21:16.782 "name": null, 00:21:16.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.782 
"is_configured": false, 00:21:16.782 "data_offset": 2048, 00:21:16.782 "data_size": 63488 00:21:16.782 }, 00:21:16.782 { 00:21:16.782 "name": "BaseBdev3", 00:21:16.782 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:16.782 "is_configured": true, 00:21:16.782 "data_offset": 2048, 00:21:16.782 "data_size": 63488 00:21:16.782 }, 00:21:16.782 { 00:21:16.782 "name": "BaseBdev4", 00:21:16.782 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:16.782 "is_configured": true, 00:21:16.782 "data_offset": 2048, 00:21:16.782 "data_size": 63488 00:21:16.782 } 00:21:16.782 ] 00:21:16.782 }' 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.782 09:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:17.351 "name": "raid_bdev1", 00:21:17.351 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:17.351 "strip_size_kb": 0, 00:21:17.351 "state": "online", 00:21:17.351 "raid_level": "raid1", 00:21:17.351 "superblock": true, 00:21:17.351 "num_base_bdevs": 4, 00:21:17.351 "num_base_bdevs_discovered": 2, 00:21:17.351 "num_base_bdevs_operational": 2, 00:21:17.351 "base_bdevs_list": [ 00:21:17.351 { 00:21:17.351 "name": null, 00:21:17.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.351 "is_configured": false, 00:21:17.351 "data_offset": 0, 00:21:17.351 "data_size": 63488 00:21:17.351 }, 00:21:17.351 { 00:21:17.351 "name": null, 00:21:17.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.351 "is_configured": false, 00:21:17.351 "data_offset": 2048, 00:21:17.351 "data_size": 63488 00:21:17.351 }, 00:21:17.351 { 00:21:17.351 "name": "BaseBdev3", 00:21:17.351 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:17.351 "is_configured": true, 00:21:17.351 "data_offset": 2048, 00:21:17.351 "data_size": 63488 00:21:17.351 }, 00:21:17.351 { 00:21:17.351 "name": "BaseBdev4", 00:21:17.351 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:17.351 "is_configured": true, 00:21:17.351 "data_offset": 2048, 00:21:17.351 "data_size": 63488 00:21:17.351 } 00:21:17.351 ] 00:21:17.351 }' 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.351 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.351 [2024-11-06 09:14:16.248420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.351 [2024-11-06 09:14:16.248746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:17.351 [2024-11-06 09:14:16.248861] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:17.351 request: 00:21:17.351 { 00:21:17.351 "base_bdev": "BaseBdev1", 00:21:17.351 "raid_bdev": "raid_bdev1", 00:21:17.352 "method": "bdev_raid_add_base_bdev", 00:21:17.352 "req_id": 1 00:21:17.352 } 00:21:17.352 Got JSON-RPC error response 00:21:17.352 response: 00:21:17.352 { 00:21:17.352 "code": -22, 00:21:17.352 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:17.352 } 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.352 09:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
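The trace above shows the test deliberately issuing `bdev_raid_add_base_bdev` for a bdev whose superblock is stale, expecting the RPC to fail with `-22`, and then converting that failure into a passing check (`es=1`). This inversion is done by the `NOT` helper from SPDK's `autotest_common.sh`; the sketch below is a simplified re-creation of that pattern, not the exact upstream source, which also tracks the exit status in `es` for later range checks:

```shell
# Simplified re-creation of the NOT helper pattern seen in the trace:
# run a command that is EXPECTED to fail, and turn its non-zero exit
# status into success so that 'set -e' test scripts can assert failures.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed as expected -> test success
}

# usage: stand-in for the expected-failure RPC call in the log
NOT false && echo "expected failure handled"
```

Running the sketch prints `expected failure handled`, mirroring how the trace proceeds to `sleep 1` only after the RPC is confirmed to have been rejected.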
00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.319 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.319 "name": "raid_bdev1", 00:21:18.319 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:18.319 "strip_size_kb": 0, 00:21:18.319 "state": "online", 00:21:18.319 "raid_level": "raid1", 00:21:18.319 "superblock": true, 00:21:18.319 "num_base_bdevs": 4, 00:21:18.319 "num_base_bdevs_discovered": 2, 00:21:18.319 "num_base_bdevs_operational": 2, 00:21:18.319 "base_bdevs_list": [ 00:21:18.319 { 00:21:18.319 "name": null, 00:21:18.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.319 "is_configured": false, 00:21:18.319 "data_offset": 0, 00:21:18.319 "data_size": 63488 00:21:18.319 }, 00:21:18.319 { 00:21:18.319 "name": null, 00:21:18.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.319 "is_configured": false, 00:21:18.319 "data_offset": 2048, 00:21:18.319 "data_size": 63488 00:21:18.320 }, 00:21:18.320 { 00:21:18.320 "name": "BaseBdev3", 00:21:18.320 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:18.320 "is_configured": true, 00:21:18.320 "data_offset": 2048, 00:21:18.320 "data_size": 63488 00:21:18.320 }, 00:21:18.320 { 00:21:18.320 "name": "BaseBdev4", 00:21:18.320 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:18.320 "is_configured": true, 00:21:18.320 "data_offset": 2048, 00:21:18.320 "data_size": 63488 00:21:18.320 } 00:21:18.320 ] 00:21:18.320 }' 00:21:18.320 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.320 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.895 09:14:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.895 "name": "raid_bdev1", 00:21:18.895 "uuid": "40c2f6a2-7ef0-4e29-8f60-2d66dfeb3636", 00:21:18.895 "strip_size_kb": 0, 00:21:18.895 "state": "online", 00:21:18.895 "raid_level": "raid1", 00:21:18.895 "superblock": true, 00:21:18.895 "num_base_bdevs": 4, 00:21:18.895 "num_base_bdevs_discovered": 2, 00:21:18.895 "num_base_bdevs_operational": 2, 00:21:18.895 "base_bdevs_list": [ 00:21:18.895 { 00:21:18.895 "name": null, 00:21:18.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.895 "is_configured": false, 00:21:18.895 "data_offset": 0, 00:21:18.895 "data_size": 63488 00:21:18.895 }, 00:21:18.895 { 00:21:18.895 "name": null, 00:21:18.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.895 "is_configured": false, 00:21:18.895 "data_offset": 2048, 00:21:18.895 "data_size": 63488 00:21:18.895 }, 00:21:18.895 { 00:21:18.895 "name": "BaseBdev3", 00:21:18.895 "uuid": "583036f2-a119-534c-afc6-1de80b1440fd", 00:21:18.895 "is_configured": true, 00:21:18.895 "data_offset": 2048, 00:21:18.895 "data_size": 63488 00:21:18.895 }, 
00:21:18.895 { 00:21:18.895 "name": "BaseBdev4", 00:21:18.895 "uuid": "2a84ce4c-2436-509e-9402-88ac5c97214d", 00:21:18.895 "is_configured": true, 00:21:18.895 "data_offset": 2048, 00:21:18.895 "data_size": 63488 00:21:18.895 } 00:21:18.895 ] 00:21:18.895 }' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77721 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 77721 ']' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 77721 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.895 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77721 00:21:18.895 killing process with pid 77721 00:21:18.895 Received shutdown signal, test time was about 60.000000 seconds 00:21:18.895 00:21:18.895 Latency(us) 00:21:18.895 [2024-11-06T09:14:17.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.895 [2024-11-06T09:14:17.935Z] =================================================================================================================== 00:21:18.895 [2024-11-06T09:14:17.935Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.896 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:21:18.896 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:18.896 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77721' 00:21:18.896 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 77721 00:21:18.896 [2024-11-06 09:14:17.847028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:18.896 09:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 77721 00:21:18.896 [2024-11-06 09:14:17.847154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.896 [2024-11-06 09:14:17.847224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.896 [2024-11-06 09:14:17.847235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:19.464 [2024-11-06 09:14:18.346020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:20.841 00:21:20.841 real 0m25.014s 00:21:20.841 user 0m30.036s 00:21:20.841 sys 0m4.321s 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.841 ************************************ 00:21:20.841 END TEST raid_rebuild_test_sb 00:21:20.841 ************************************ 00:21:20.841 09:14:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:21:20.841 09:14:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:20.841 09:14:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:20.841 09:14:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
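The teardown above calls `killprocess 77721`, which checks the pid is alive, verifies the command name, sends the signal, and waits for the bdevperf process to exit. The following is a minimal stand-alone sketch of that teardown pattern, assuming a hypothetical background job in place of bdevperf; the real `killprocess` in `autotest_common.sh` additionally matches the process name and handles sudo-owned processes:

```shell
# Hedged sketch of the killprocess teardown pattern from the trace:
# confirm the pid exists, terminate it, and reap it so no zombie remains.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1         # fail fast if the pid is not alive
    kill "$pid"                        # send SIGTERM
    wait "$pid" 2>/dev/null || true    # reap; ignore the 128+15 signal status
}

sleep 60 &                 # stand-in for the long-running bdevperf process
bgpid=$!
killprocess "$bgpid"
echo "killing process with pid $bgpid"
```

The `wait` after `kill` matters: it is what lets the harness print the final per-test statistics (the Latency table above) only once the I/O process has actually exited.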
00:21:20.841 ************************************ 00:21:20.841 START TEST raid_rebuild_test_io 00:21:20.841 ************************************ 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78480 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78480 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78480 ']' 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.841 09:14:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:21:20.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.842 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.842 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:20.842 09:14:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:20.842 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:20.842 Zero copy mechanism will not be used. 00:21:20.842 [2024-11-06 09:14:19.643545] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:21:20.842 [2024-11-06 09:14:19.643676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78480 ] 00:21:20.842 [2024-11-06 09:14:19.816155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.100 [2024-11-06 09:14:19.933921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.358 [2024-11-06 09:14:20.149558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.358 [2024-11-06 09:14:20.149594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.618 BaseBdev1_malloc 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.618 [2024-11-06 09:14:20.526710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.618 [2024-11-06 09:14:20.526785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.618 [2024-11-06 09:14:20.526813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:21.618 [2024-11-06 09:14:20.526828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.618 [2024-11-06 09:14:20.529202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.618 [2024-11-06 09:14:20.529245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.618 BaseBdev1 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:21:21.618 BaseBdev2_malloc 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.618 [2024-11-06 09:14:20.585763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:21.618 [2024-11-06 09:14:20.585827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.618 [2024-11-06 09:14:20.585850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:21.618 [2024-11-06 09:14:20.585866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.618 [2024-11-06 09:14:20.588290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.618 [2024-11-06 09:14:20.588329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:21.618 BaseBdev2 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.618 BaseBdev3_malloc 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.618 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.618 [2024-11-06 09:14:20.652821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:21.618 [2024-11-06 09:14:20.652879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.618 [2024-11-06 09:14:20.652903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:21.618 [2024-11-06 09:14:20.652917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.618 [2024-11-06 09:14:20.655322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.618 [2024-11-06 09:14:20.655366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:21.919 BaseBdev3 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 BaseBdev4_malloc 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 [2024-11-06 09:14:20.711935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:21.919 [2024-11-06 09:14:20.711993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.919 [2024-11-06 09:14:20.712016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:21.919 [2024-11-06 09:14:20.712030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.919 [2024-11-06 09:14:20.714356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.919 [2024-11-06 09:14:20.714399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:21.919 BaseBdev4 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 spare_malloc 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 spare_delay 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 [2024-11-06 09:14:20.780171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.919 [2024-11-06 09:14:20.780232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.919 [2024-11-06 09:14:20.780253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:21.919 [2024-11-06 09:14:20.780268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.919 [2024-11-06 09:14:20.782618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.919 [2024-11-06 09:14:20.782660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.919 spare 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 [2024-11-06 09:14:20.792217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:21.919 [2024-11-06 09:14:20.794286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.919 [2024-11-06 09:14:20.794356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.919 [2024-11-06 09:14:20.794408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:21:21.919 [2024-11-06 09:14:20.794487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:21.919 [2024-11-06 09:14:20.794502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:21.919 [2024-11-06 09:14:20.794762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:21.919 [2024-11-06 09:14:20.794930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:21.919 [2024-11-06 09:14:20.794961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:21.919 [2024-11-06 09:14:20.795108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.919 "name": "raid_bdev1", 00:21:21.919 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:21.919 "strip_size_kb": 0, 00:21:21.919 "state": "online", 00:21:21.919 "raid_level": "raid1", 00:21:21.919 "superblock": false, 00:21:21.919 "num_base_bdevs": 4, 00:21:21.919 "num_base_bdevs_discovered": 4, 00:21:21.919 "num_base_bdevs_operational": 4, 00:21:21.919 "base_bdevs_list": [ 00:21:21.919 { 00:21:21.919 "name": "BaseBdev1", 00:21:21.919 "uuid": "afffc4c5-45cf-58ef-b8d4-6926ebb7ffaa", 00:21:21.919 "is_configured": true, 00:21:21.919 "data_offset": 0, 00:21:21.919 "data_size": 65536 00:21:21.919 }, 00:21:21.919 { 00:21:21.919 "name": "BaseBdev2", 00:21:21.919 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:21.919 "is_configured": true, 00:21:21.919 "data_offset": 0, 00:21:21.919 "data_size": 65536 00:21:21.919 }, 00:21:21.919 { 00:21:21.919 "name": "BaseBdev3", 00:21:21.919 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:21.919 "is_configured": true, 00:21:21.919 "data_offset": 0, 00:21:21.919 "data_size": 65536 00:21:21.919 }, 00:21:21.919 { 00:21:21.919 "name": "BaseBdev4", 00:21:21.919 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:21.919 "is_configured": true, 00:21:21.919 "data_offset": 0, 00:21:21.919 "data_size": 65536 00:21:21.919 } 00:21:21.919 ] 00:21:21.919 }' 00:21:21.919 
09:14:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.919 09:14:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 [2024-11-06 09:14:21.251886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:22.488 09:14:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 [2024-11-06 09:14:21.343423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.488 "name": "raid_bdev1", 00:21:22.488 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:22.488 "strip_size_kb": 0, 00:21:22.488 "state": "online", 00:21:22.488 "raid_level": "raid1", 00:21:22.488 "superblock": false, 00:21:22.488 "num_base_bdevs": 4, 00:21:22.488 "num_base_bdevs_discovered": 3, 00:21:22.488 "num_base_bdevs_operational": 3, 00:21:22.488 "base_bdevs_list": [ 00:21:22.488 { 00:21:22.488 "name": null, 00:21:22.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.488 "is_configured": false, 00:21:22.488 "data_offset": 0, 00:21:22.488 "data_size": 65536 00:21:22.488 }, 00:21:22.488 { 00:21:22.488 "name": "BaseBdev2", 00:21:22.488 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:22.488 "is_configured": true, 00:21:22.488 "data_offset": 0, 00:21:22.488 "data_size": 65536 00:21:22.488 }, 00:21:22.488 { 00:21:22.488 "name": "BaseBdev3", 00:21:22.488 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:22.488 "is_configured": true, 00:21:22.488 "data_offset": 0, 00:21:22.488 "data_size": 65536 00:21:22.488 }, 00:21:22.488 { 00:21:22.488 "name": "BaseBdev4", 00:21:22.488 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:22.488 "is_configured": true, 00:21:22.488 "data_offset": 0, 00:21:22.488 "data_size": 65536 00:21:22.488 } 00:21:22.488 ] 00:21:22.488 }' 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.488 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 [2024-11-06 09:14:21.435367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:22.488 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:22.488 Zero copy mechanism will not be used. 00:21:22.488 Running I/O for 60 seconds... 
00:21:22.747 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.747 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.747 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.006 [2024-11-06 09:14:21.787242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.006 09:14:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.006 09:14:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:23.006 [2024-11-06 09:14:21.855536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:23.006 [2024-11-06 09:14:21.857780] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.006 [2024-11-06 09:14:21.980949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:23.006 [2024-11-06 09:14:21.982364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:23.264 [2024-11-06 09:14:22.222564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:24.089 169.00 IOPS, 507.00 MiB/s [2024-11-06T09:14:23.129Z] 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.089 "name": "raid_bdev1", 00:21:24.089 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:24.089 "strip_size_kb": 0, 00:21:24.089 "state": "online", 00:21:24.089 "raid_level": "raid1", 00:21:24.089 "superblock": false, 00:21:24.089 "num_base_bdevs": 4, 00:21:24.089 "num_base_bdevs_discovered": 4, 00:21:24.089 "num_base_bdevs_operational": 4, 00:21:24.089 "process": { 00:21:24.089 "type": "rebuild", 00:21:24.089 "target": "spare", 00:21:24.089 "progress": { 00:21:24.089 "blocks": 12288, 00:21:24.089 "percent": 18 00:21:24.089 } 00:21:24.089 }, 00:21:24.089 "base_bdevs_list": [ 00:21:24.089 { 00:21:24.089 "name": "spare", 00:21:24.089 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:24.089 "is_configured": true, 00:21:24.089 "data_offset": 0, 00:21:24.089 "data_size": 65536 00:21:24.089 }, 00:21:24.089 { 00:21:24.089 "name": "BaseBdev2", 00:21:24.089 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:24.089 "is_configured": true, 00:21:24.089 "data_offset": 0, 00:21:24.089 "data_size": 65536 00:21:24.089 }, 00:21:24.089 { 00:21:24.089 "name": "BaseBdev3", 00:21:24.089 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:24.089 "is_configured": true, 00:21:24.089 "data_offset": 0, 00:21:24.089 "data_size": 65536 00:21:24.089 }, 00:21:24.089 { 00:21:24.089 "name": "BaseBdev4", 00:21:24.089 
"uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:24.089 "is_configured": true, 00:21:24.089 "data_offset": 0, 00:21:24.089 "data_size": 65536 00:21:24.089 } 00:21:24.089 ] 00:21:24.089 }' 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.089 [2024-11-06 09:14:22.943692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.089 09:14:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.089 [2024-11-06 09:14:22.950138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.089 [2024-11-06 09:14:23.061326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:24.347 [2024-11-06 09:14:23.161224] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:24.347 [2024-11-06 09:14:23.163606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.347 [2024-11-06 09:14:23.163660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.347 [2024-11-06 09:14:23.163673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:24.347 [2024-11-06 09:14:23.193802] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.347 "name": "raid_bdev1", 00:21:24.347 
"uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:24.347 "strip_size_kb": 0, 00:21:24.347 "state": "online", 00:21:24.347 "raid_level": "raid1", 00:21:24.347 "superblock": false, 00:21:24.347 "num_base_bdevs": 4, 00:21:24.347 "num_base_bdevs_discovered": 3, 00:21:24.347 "num_base_bdevs_operational": 3, 00:21:24.347 "base_bdevs_list": [ 00:21:24.347 { 00:21:24.347 "name": null, 00:21:24.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.347 "is_configured": false, 00:21:24.347 "data_offset": 0, 00:21:24.347 "data_size": 65536 00:21:24.347 }, 00:21:24.347 { 00:21:24.347 "name": "BaseBdev2", 00:21:24.347 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:24.347 "is_configured": true, 00:21:24.347 "data_offset": 0, 00:21:24.347 "data_size": 65536 00:21:24.347 }, 00:21:24.347 { 00:21:24.347 "name": "BaseBdev3", 00:21:24.347 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:24.347 "is_configured": true, 00:21:24.347 "data_offset": 0, 00:21:24.347 "data_size": 65536 00:21:24.347 }, 00:21:24.347 { 00:21:24.347 "name": "BaseBdev4", 00:21:24.347 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:24.347 "is_configured": true, 00:21:24.347 "data_offset": 0, 00:21:24.347 "data_size": 65536 00:21:24.347 } 00:21:24.347 ] 00:21:24.347 }' 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.347 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.605 134.50 IOPS, 403.50 MiB/s [2024-11-06T09:14:23.645Z] 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.605 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.605 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.605 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.605 09:14:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.863 "name": "raid_bdev1", 00:21:24.863 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:24.863 "strip_size_kb": 0, 00:21:24.863 "state": "online", 00:21:24.863 "raid_level": "raid1", 00:21:24.863 "superblock": false, 00:21:24.863 "num_base_bdevs": 4, 00:21:24.863 "num_base_bdevs_discovered": 3, 00:21:24.863 "num_base_bdevs_operational": 3, 00:21:24.863 "base_bdevs_list": [ 00:21:24.863 { 00:21:24.863 "name": null, 00:21:24.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.863 "is_configured": false, 00:21:24.863 "data_offset": 0, 00:21:24.863 "data_size": 65536 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "name": "BaseBdev2", 00:21:24.863 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:24.863 "is_configured": true, 00:21:24.863 "data_offset": 0, 00:21:24.863 "data_size": 65536 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "name": "BaseBdev3", 00:21:24.863 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:24.863 "is_configured": true, 00:21:24.863 "data_offset": 0, 00:21:24.863 "data_size": 65536 00:21:24.863 }, 00:21:24.863 { 00:21:24.863 "name": "BaseBdev4", 00:21:24.863 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:24.863 "is_configured": true, 00:21:24.863 "data_offset": 0, 00:21:24.863 "data_size": 65536 
00:21:24.863 } 00:21:24.863 ] 00:21:24.863 }' 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.863 [2024-11-06 09:14:23.767151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.863 09:14:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:24.863 [2024-11-06 09:14:23.833146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:24.863 [2024-11-06 09:14:23.835360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.124 [2024-11-06 09:14:23.951485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.124 [2024-11-06 09:14:23.952898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.124 [2024-11-06 09:14:24.154894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:25.124 [2024-11-06 09:14:24.155118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
4096 offset_begin: 0 offset_end: 6144 00:21:25.692 136.33 IOPS, 409.00 MiB/s [2024-11-06T09:14:24.732Z] [2024-11-06 09:14:24.529796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.692 [2024-11-06 09:14:24.530531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.951 "name": "raid_bdev1", 00:21:25.951 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:25.951 "strip_size_kb": 0, 00:21:25.951 "state": "online", 00:21:25.951 "raid_level": "raid1", 00:21:25.951 "superblock": false, 00:21:25.951 "num_base_bdevs": 4, 00:21:25.951 "num_base_bdevs_discovered": 4, 00:21:25.951 "num_base_bdevs_operational": 4, 00:21:25.951 
"process": { 00:21:25.951 "type": "rebuild", 00:21:25.951 "target": "spare", 00:21:25.951 "progress": { 00:21:25.951 "blocks": 12288, 00:21:25.951 "percent": 18 00:21:25.951 } 00:21:25.951 }, 00:21:25.951 "base_bdevs_list": [ 00:21:25.951 { 00:21:25.951 "name": "spare", 00:21:25.951 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:25.951 "is_configured": true, 00:21:25.951 "data_offset": 0, 00:21:25.951 "data_size": 65536 00:21:25.951 }, 00:21:25.951 { 00:21:25.951 "name": "BaseBdev2", 00:21:25.951 "uuid": "c1cedc7a-d504-5c18-9a71-1fc800d0bf51", 00:21:25.951 "is_configured": true, 00:21:25.951 "data_offset": 0, 00:21:25.951 "data_size": 65536 00:21:25.951 }, 00:21:25.951 { 00:21:25.951 "name": "BaseBdev3", 00:21:25.951 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:25.951 "is_configured": true, 00:21:25.951 "data_offset": 0, 00:21:25.951 "data_size": 65536 00:21:25.951 }, 00:21:25.951 { 00:21:25.951 "name": "BaseBdev4", 00:21:25.951 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:25.951 "is_configured": true, 00:21:25.951 "data_offset": 0, 00:21:25.951 "data_size": 65536 00:21:25.951 } 00:21:25.951 ] 00:21:25.951 }' 00:21:25.951 [2024-11-06 09:14:24.862848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 
00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.951 09:14:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:25.951 [2024-11-06 09:14:24.962271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:25.951 [2024-11-06 09:14:24.966018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:26.210 [2024-11-06 09:14:24.995695] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:26.210 [2024-11-06 09:14:24.995738] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:26.210 [2024-11-06 09:14:25.003868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.210 "name": "raid_bdev1", 00:21:26.210 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:26.210 "strip_size_kb": 0, 00:21:26.210 "state": "online", 00:21:26.210 "raid_level": "raid1", 00:21:26.210 "superblock": false, 00:21:26.210 "num_base_bdevs": 4, 00:21:26.210 "num_base_bdevs_discovered": 3, 00:21:26.210 "num_base_bdevs_operational": 3, 00:21:26.210 "process": { 00:21:26.210 "type": "rebuild", 00:21:26.210 "target": "spare", 00:21:26.210 "progress": { 00:21:26.210 "blocks": 16384, 00:21:26.210 "percent": 25 00:21:26.210 } 00:21:26.210 }, 00:21:26.210 "base_bdevs_list": [ 00:21:26.210 { 00:21:26.210 "name": "spare", 00:21:26.210 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:26.210 "is_configured": true, 00:21:26.210 "data_offset": 0, 00:21:26.210 "data_size": 65536 00:21:26.210 }, 00:21:26.210 { 00:21:26.210 "name": null, 00:21:26.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.210 "is_configured": false, 00:21:26.210 "data_offset": 0, 00:21:26.210 "data_size": 65536 00:21:26.210 }, 00:21:26.210 { 00:21:26.210 "name": "BaseBdev3", 00:21:26.210 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:26.210 "is_configured": true, 00:21:26.210 "data_offset": 0, 
00:21:26.210 "data_size": 65536 00:21:26.210 }, 00:21:26.210 { 00:21:26.210 "name": "BaseBdev4", 00:21:26.210 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:26.210 "is_configured": true, 00:21:26.210 "data_offset": 0, 00:21:26.210 "data_size": 65536 00:21:26.210 } 00:21:26.210 ] 00:21:26.210 }' 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.210 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.211 09:14:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.211 "name": "raid_bdev1", 00:21:26.211 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:26.211 "strip_size_kb": 0, 00:21:26.211 "state": "online", 00:21:26.211 "raid_level": "raid1", 00:21:26.211 "superblock": false, 00:21:26.211 "num_base_bdevs": 4, 00:21:26.211 "num_base_bdevs_discovered": 3, 00:21:26.211 "num_base_bdevs_operational": 3, 00:21:26.211 "process": { 00:21:26.211 "type": "rebuild", 00:21:26.211 "target": "spare", 00:21:26.211 "progress": { 00:21:26.211 "blocks": 16384, 00:21:26.211 "percent": 25 00:21:26.211 } 00:21:26.211 }, 00:21:26.211 "base_bdevs_list": [ 00:21:26.211 { 00:21:26.211 "name": "spare", 00:21:26.211 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:26.211 "is_configured": true, 00:21:26.211 "data_offset": 0, 00:21:26.211 "data_size": 65536 00:21:26.211 }, 00:21:26.211 { 00:21:26.211 "name": null, 00:21:26.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.211 "is_configured": false, 00:21:26.211 "data_offset": 0, 00:21:26.211 "data_size": 65536 00:21:26.211 }, 00:21:26.211 { 00:21:26.211 "name": "BaseBdev3", 00:21:26.211 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:26.211 "is_configured": true, 00:21:26.211 "data_offset": 0, 00:21:26.211 "data_size": 65536 00:21:26.211 }, 00:21:26.211 { 00:21:26.211 "name": "BaseBdev4", 00:21:26.211 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:26.211 "is_configured": true, 00:21:26.211 "data_offset": 0, 00:21:26.211 "data_size": 65536 00:21:26.211 } 00:21:26.211 ] 00:21:26.211 }' 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.211 09:14:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.469 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.469 09:14:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:26.469 [2024-11-06 09:14:25.352138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:26.727 120.50 IOPS, 361.50 MiB/s [2024-11-06T09:14:25.767Z] [2024-11-06 09:14:25.689539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.294 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.294 "name": "raid_bdev1", 
00:21:27.294 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:27.294 "strip_size_kb": 0, 00:21:27.294 "state": "online", 00:21:27.294 "raid_level": "raid1", 00:21:27.294 "superblock": false, 00:21:27.294 "num_base_bdevs": 4, 00:21:27.294 "num_base_bdevs_discovered": 3, 00:21:27.294 "num_base_bdevs_operational": 3, 00:21:27.294 "process": { 00:21:27.294 "type": "rebuild", 00:21:27.294 "target": "spare", 00:21:27.294 "progress": { 00:21:27.295 "blocks": 36864, 00:21:27.295 "percent": 56 00:21:27.295 } 00:21:27.295 }, 00:21:27.295 "base_bdevs_list": [ 00:21:27.295 { 00:21:27.295 "name": "spare", 00:21:27.295 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:27.295 "is_configured": true, 00:21:27.295 "data_offset": 0, 00:21:27.295 "data_size": 65536 00:21:27.295 }, 00:21:27.295 { 00:21:27.295 "name": null, 00:21:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.295 "is_configured": false, 00:21:27.295 "data_offset": 0, 00:21:27.295 "data_size": 65536 00:21:27.295 }, 00:21:27.295 { 00:21:27.295 "name": "BaseBdev3", 00:21:27.295 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:27.295 "is_configured": true, 00:21:27.295 "data_offset": 0, 00:21:27.295 "data_size": 65536 00:21:27.295 }, 00:21:27.295 { 00:21:27.295 "name": "BaseBdev4", 00:21:27.295 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:27.295 "is_configured": true, 00:21:27.295 "data_offset": 0, 00:21:27.295 "data_size": 65536 00:21:27.295 } 00:21:27.295 ] 00:21:27.295 }' 00:21:27.295 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.554 [2024-11-06 09:14:26.360541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:27.554 [2024-11-06 09:14:26.361094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:27.554 09:14:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.554 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.554 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.554 09:14:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:27.813 108.40 IOPS, 325.20 MiB/s [2024-11-06T09:14:26.853Z] [2024-11-06 09:14:26.593918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:28.379 [2024-11-06 09:14:27.171537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.637 99.83 IOPS, 299.50 MiB/s [2024-11-06T09:14:27.677Z] 09:14:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.637 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.637 "name": "raid_bdev1", 00:21:28.637 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:28.637 "strip_size_kb": 0, 00:21:28.637 "state": "online", 00:21:28.637 "raid_level": "raid1", 00:21:28.637 "superblock": false, 00:21:28.637 "num_base_bdevs": 4, 00:21:28.637 "num_base_bdevs_discovered": 3, 00:21:28.637 "num_base_bdevs_operational": 3, 00:21:28.637 "process": { 00:21:28.637 "type": "rebuild", 00:21:28.637 "target": "spare", 00:21:28.637 "progress": { 00:21:28.637 "blocks": 55296, 00:21:28.637 "percent": 84 00:21:28.637 } 00:21:28.637 }, 00:21:28.637 "base_bdevs_list": [ 00:21:28.637 { 00:21:28.637 "name": "spare", 00:21:28.637 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:28.637 "is_configured": true, 00:21:28.637 "data_offset": 0, 00:21:28.638 "data_size": 65536 00:21:28.638 }, 00:21:28.638 { 00:21:28.638 "name": null, 00:21:28.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.638 "is_configured": false, 00:21:28.638 "data_offset": 0, 00:21:28.638 "data_size": 65536 00:21:28.638 }, 00:21:28.638 { 00:21:28.638 "name": "BaseBdev3", 00:21:28.638 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:28.638 "is_configured": true, 00:21:28.638 "data_offset": 0, 00:21:28.638 "data_size": 65536 00:21:28.638 }, 00:21:28.638 { 00:21:28.638 "name": "BaseBdev4", 00:21:28.638 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:28.638 "is_configured": true, 00:21:28.638 "data_offset": 0, 00:21:28.638 "data_size": 65536 00:21:28.638 } 00:21:28.638 ] 00:21:28.638 }' 00:21:28.638 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.638 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.638 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:21:28.638 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.638 09:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:29.203 [2024-11-06 09:14:27.942425] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:29.203 [2024-11-06 09:14:28.048143] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:29.203 [2024-11-06 09:14:28.052358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.719 90.86 IOPS, 272.57 MiB/s [2024-11-06T09:14:28.759Z] 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.719 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.720 "name": 
"raid_bdev1", 00:21:29.720 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:29.720 "strip_size_kb": 0, 00:21:29.720 "state": "online", 00:21:29.720 "raid_level": "raid1", 00:21:29.720 "superblock": false, 00:21:29.720 "num_base_bdevs": 4, 00:21:29.720 "num_base_bdevs_discovered": 3, 00:21:29.720 "num_base_bdevs_operational": 3, 00:21:29.720 "base_bdevs_list": [ 00:21:29.720 { 00:21:29.720 "name": "spare", 00:21:29.720 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": null, 00:21:29.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.720 "is_configured": false, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": "BaseBdev3", 00:21:29.720 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": "BaseBdev4", 00:21:29.720 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 } 00:21:29.720 ] 00:21:29.720 }' 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.720 09:14:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.720 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.720 "name": "raid_bdev1", 00:21:29.720 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:29.720 "strip_size_kb": 0, 00:21:29.720 "state": "online", 00:21:29.720 "raid_level": "raid1", 00:21:29.720 "superblock": false, 00:21:29.720 "num_base_bdevs": 4, 00:21:29.720 "num_base_bdevs_discovered": 3, 00:21:29.720 "num_base_bdevs_operational": 3, 00:21:29.720 "base_bdevs_list": [ 00:21:29.720 { 00:21:29.720 "name": "spare", 00:21:29.720 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": null, 00:21:29.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.720 "is_configured": false, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": "BaseBdev3", 00:21:29.720 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 
00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 }, 00:21:29.720 { 00:21:29.720 "name": "BaseBdev4", 00:21:29.720 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:29.720 "is_configured": true, 00:21:29.720 "data_offset": 0, 00:21:29.720 "data_size": 65536 00:21:29.720 } 00:21:29.720 ] 00:21:29.720 }' 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.978 "name": "raid_bdev1", 00:21:29.978 "uuid": "2c3918d0-9c93-4611-9e99-1e677caafdfd", 00:21:29.978 "strip_size_kb": 0, 00:21:29.978 "state": "online", 00:21:29.978 "raid_level": "raid1", 00:21:29.978 "superblock": false, 00:21:29.978 "num_base_bdevs": 4, 00:21:29.978 "num_base_bdevs_discovered": 3, 00:21:29.978 "num_base_bdevs_operational": 3, 00:21:29.978 "base_bdevs_list": [ 00:21:29.978 { 00:21:29.978 "name": "spare", 00:21:29.978 "uuid": "a31daa40-3aa4-5d31-a126-4ff0bb869101", 00:21:29.978 "is_configured": true, 00:21:29.978 "data_offset": 0, 00:21:29.978 "data_size": 65536 00:21:29.978 }, 00:21:29.978 { 00:21:29.978 "name": null, 00:21:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.978 "is_configured": false, 00:21:29.978 "data_offset": 0, 00:21:29.978 "data_size": 65536 00:21:29.978 }, 00:21:29.978 { 00:21:29.978 "name": "BaseBdev3", 00:21:29.978 "uuid": "8d12df77-817e-594f-b05c-f74a09e31110", 00:21:29.978 "is_configured": true, 00:21:29.978 "data_offset": 0, 00:21:29.978 "data_size": 65536 00:21:29.978 }, 00:21:29.978 { 00:21:29.978 "name": "BaseBdev4", 00:21:29.978 "uuid": "0bf4b180-5382-5139-8d5f-b6b64fba7892", 00:21:29.978 "is_configured": true, 00:21:29.978 "data_offset": 0, 00:21:29.978 "data_size": 65536 00:21:29.978 } 00:21:29.978 ] 00:21:29.978 }' 00:21:29.978 09:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.978 09:14:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.236 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:30.236 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.236 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.236 [2024-11-06 09:14:29.257177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.236 [2024-11-06 09:14:29.257364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.494 00:21:30.494 Latency(us) 00:21:30.494 [2024-11-06T09:14:29.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.494 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:30.494 raid_bdev1 : 7.92 84.05 252.14 0.00 0.00 16905.80 307.61 117069.93 00:21:30.494 [2024-11-06T09:14:29.534Z] =================================================================================================================== 00:21:30.494 [2024-11-06T09:14:29.534Z] Total : 84.05 252.14 0.00 0.00 16905.80 307.61 117069.93 00:21:30.494 [2024-11-06 09:14:29.371967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.494 [2024-11-06 09:14:29.372030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.494 [2024-11-06 09:14:29.372129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.494 [2024-11-06 09:14:29.372148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:30.494 { 00:21:30.494 "results": [ 00:21:30.494 { 00:21:30.494 "job": "raid_bdev1", 00:21:30.494 "core_mask": "0x1", 00:21:30.494 "workload": "randrw", 00:21:30.494 "percentage": 50, 00:21:30.494 "status": "finished", 
00:21:30.494 "queue_depth": 2, 00:21:30.494 "io_size": 3145728, 00:21:30.494 "runtime": 7.924195, 00:21:30.494 "iops": 84.04639209408653, 00:21:30.494 "mibps": 252.1391762822596, 00:21:30.494 "io_failed": 0, 00:21:30.494 "io_timeout": 0, 00:21:30.494 "avg_latency_us": 16905.80072120313, 00:21:30.494 "min_latency_us": 307.61124497991966, 00:21:30.494 "max_latency_us": 117069.93092369477 00:21:30.494 } 00:21:30.494 ], 00:21:30.494 "core_count": 1 00:21:30.494 } 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:30.494 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:30.753 /dev/nbd0 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:30.753 1+0 records in 00:21:30.753 1+0 records out 00:21:30.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406617 s, 10.1 MB/s 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:30.753 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:30.754 09:14:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:30.754 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:31.011 /dev/nbd1 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:31.011 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:31.011 1+0 records in 00:21:31.011 1+0 records out 00:21:31.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389734 s, 10.5 MB/s 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 
00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:31.012 09:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.270 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:31.528 
09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:31.528 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:31.786 /dev/nbd1 00:21:31.786 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:31.786 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:31.786 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:31.786 09:14:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:31.786 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:31.787 1+0 records in 00:21:31.787 1+0 records out 00:21:31.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387737 s, 10.6 MB/s 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.787 09:14:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.045 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78480 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78480 ']' 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78480 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:32.303 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78480 00:21:32.303 killing process with pid 78480 00:21:32.303 
Received shutdown signal, test time was about 9.901105 seconds 00:21:32.303 00:21:32.303 Latency(us) 00:21:32.303 [2024-11-06T09:14:31.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.304 [2024-11-06T09:14:31.344Z] =================================================================================================================== 00:21:32.304 [2024-11-06T09:14:31.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.304 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:32.304 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:32.304 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78480' 00:21:32.304 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78480 00:21:32.304 [2024-11-06 09:14:31.322691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:32.304 09:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78480 00:21:32.872 [2024-11-06 09:14:31.746917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.250 09:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:34.250 00:21:34.250 real 0m13.382s 00:21:34.250 user 0m16.690s 00:21:34.250 sys 0m2.136s 00:21:34.250 09:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.250 09:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.250 ************************************ 00:21:34.250 END TEST raid_rebuild_test_io 00:21:34.250 ************************************ 00:21:34.250 09:14:32 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:34.250 09:14:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:34.250 
09:14:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.250 09:14:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.250 ************************************ 00:21:34.250 START TEST raid_rebuild_test_sb_io 00:21:34.250 ************************************ 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78889 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78889 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 78889 ']' 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:34.250 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.250 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:34.250 Zero copy mechanism will not be used. 00:21:34.250 [2024-11-06 09:14:33.110666] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:21:34.250 [2024-11-06 09:14:33.110791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78889 ] 00:21:34.250 [2024-11-06 09:14:33.287756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.508 [2024-11-06 09:14:33.407779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.767 [2024-11-06 09:14:33.619111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.767 [2024-11-06 09:14:33.619366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 BaseBdev1_malloc 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 [2024-11-06 09:14:33.987106] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:35.026 [2024-11-06 09:14:33.987179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.026 [2024-11-06 09:14:33.987201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:35.026 [2024-11-06 09:14:33.987216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.026 [2024-11-06 09:14:33.989571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.026 [2024-11-06 09:14:33.989740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:35.026 BaseBdev1 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.026 09:14:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 BaseBdev2_malloc 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 [2024-11-06 09:14:34.044330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:35.026 [2024-11-06 09:14:34.044519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:35.026 [2024-11-06 09:14:34.044547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:35.026 [2024-11-06 09:14:34.044565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.026 [2024-11-06 09:14:34.046988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.026 [2024-11-06 09:14:34.047033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:35.026 BaseBdev2 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.026 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.284 BaseBdev3_malloc 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 [2024-11-06 09:14:34.114336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:35.285 [2024-11-06 09:14:34.114507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.285 [2024-11-06 09:14:34.114536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:35.285 
[2024-11-06 09:14:34.114551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.285 [2024-11-06 09:14:34.116863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.285 [2024-11-06 09:14:34.116909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:35.285 BaseBdev3 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 BaseBdev4_malloc 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 [2024-11-06 09:14:34.171832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:35.285 [2024-11-06 09:14:34.171888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.285 [2024-11-06 09:14:34.171910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:35.285 [2024-11-06 09:14:34.171924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.285 [2024-11-06 09:14:34.174215] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.285 [2024-11-06 09:14:34.174377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:35.285 BaseBdev4 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 spare_malloc 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 spare_delay 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 [2024-11-06 09:14:34.240926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:35.285 [2024-11-06 09:14:34.240989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.285 [2024-11-06 09:14:34.241012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:21:35.285 [2024-11-06 09:14:34.241026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.285 [2024-11-06 09:14:34.243375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.285 [2024-11-06 09:14:34.243418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:35.285 spare 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 [2024-11-06 09:14:34.252963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.285 [2024-11-06 09:14:34.255212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.285 [2024-11-06 09:14:34.255300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.285 [2024-11-06 09:14:34.255357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.285 [2024-11-06 09:14:34.255534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:35.285 [2024-11-06 09:14:34.255554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:35.285 [2024-11-06 09:14:34.255816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:35.285 [2024-11-06 09:14:34.256000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:35.285 [2024-11-06 09:14:34.256011] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:35.285 [2024-11-06 09:14:34.256173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.285 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.285 "name": "raid_bdev1", 00:21:35.285 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:35.285 "strip_size_kb": 0, 00:21:35.285 "state": "online", 00:21:35.285 "raid_level": "raid1", 00:21:35.285 "superblock": true, 00:21:35.285 "num_base_bdevs": 4, 00:21:35.285 "num_base_bdevs_discovered": 4, 00:21:35.285 "num_base_bdevs_operational": 4, 00:21:35.285 "base_bdevs_list": [ 00:21:35.285 { 00:21:35.285 "name": "BaseBdev1", 00:21:35.285 "uuid": "5d481b52-5fa2-5d3a-bc67-15eb314ae109", 00:21:35.285 "is_configured": true, 00:21:35.285 "data_offset": 2048, 00:21:35.285 "data_size": 63488 00:21:35.285 }, 00:21:35.285 { 00:21:35.285 "name": "BaseBdev2", 00:21:35.285 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:35.285 "is_configured": true, 00:21:35.285 "data_offset": 2048, 00:21:35.285 "data_size": 63488 00:21:35.285 }, 00:21:35.285 { 00:21:35.285 "name": "BaseBdev3", 00:21:35.285 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:35.285 "is_configured": true, 00:21:35.285 "data_offset": 2048, 00:21:35.285 "data_size": 63488 00:21:35.285 }, 00:21:35.285 { 00:21:35.285 "name": "BaseBdev4", 00:21:35.286 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:35.286 "is_configured": true, 00:21:35.286 "data_offset": 2048, 00:21:35.286 "data_size": 63488 00:21:35.286 } 00:21:35.286 ] 00:21:35.286 }' 00:21:35.286 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.286 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.852 [2024-11-06 09:14:34.652784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.852 [2024-11-06 09:14:34.752290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.852 09:14:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.852 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.853 "name": "raid_bdev1", 00:21:35.853 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:35.853 "strip_size_kb": 0, 00:21:35.853 "state": "online", 00:21:35.853 "raid_level": "raid1", 00:21:35.853 
"superblock": true, 00:21:35.853 "num_base_bdevs": 4, 00:21:35.853 "num_base_bdevs_discovered": 3, 00:21:35.853 "num_base_bdevs_operational": 3, 00:21:35.853 "base_bdevs_list": [ 00:21:35.853 { 00:21:35.853 "name": null, 00:21:35.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.853 "is_configured": false, 00:21:35.853 "data_offset": 0, 00:21:35.853 "data_size": 63488 00:21:35.853 }, 00:21:35.853 { 00:21:35.853 "name": "BaseBdev2", 00:21:35.853 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:35.853 "is_configured": true, 00:21:35.853 "data_offset": 2048, 00:21:35.853 "data_size": 63488 00:21:35.853 }, 00:21:35.853 { 00:21:35.853 "name": "BaseBdev3", 00:21:35.853 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:35.853 "is_configured": true, 00:21:35.853 "data_offset": 2048, 00:21:35.853 "data_size": 63488 00:21:35.853 }, 00:21:35.853 { 00:21:35.853 "name": "BaseBdev4", 00:21:35.853 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:35.853 "is_configured": true, 00:21:35.853 "data_offset": 2048, 00:21:35.853 "data_size": 63488 00:21:35.853 } 00:21:35.853 ] 00:21:35.853 }' 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.853 09:14:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.853 [2024-11-06 09:14:34.848148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:35.853 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:35.853 Zero copy mechanism will not be used. 00:21:35.853 Running I/O for 60 seconds... 
00:21:36.111 09:14:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:36.111 09:14:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.111 09:14:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.111 [2024-11-06 09:14:35.118512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.111 09:14:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.111 09:14:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:36.370 [2024-11-06 09:14:35.160841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:36.370 [2024-11-06 09:14:35.163138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:36.370 [2024-11-06 09:14:35.286017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:36.370 [2024-11-06 09:14:35.286570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:36.629 [2024-11-06 09:14:35.411102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:36.629 [2024-11-06 09:14:35.411847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:36.888 [2024-11-06 09:14:35.760713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:36.888 156.00 IOPS, 468.00 MiB/s [2024-11-06T09:14:35.928Z] [2024-11-06 09:14:35.877585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:36.888 [2024-11-06 09:14:35.877906] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:37.147 [2024-11-06 09:14:36.106546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.147 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.406 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.406 "name": "raid_bdev1", 00:21:37.406 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:37.406 "strip_size_kb": 0, 00:21:37.406 "state": "online", 00:21:37.406 "raid_level": "raid1", 00:21:37.406 "superblock": true, 00:21:37.406 "num_base_bdevs": 4, 00:21:37.406 "num_base_bdevs_discovered": 4, 00:21:37.406 "num_base_bdevs_operational": 4, 00:21:37.406 "process": { 00:21:37.406 "type": "rebuild", 00:21:37.406 "target": "spare", 00:21:37.406 "progress": { 
00:21:37.406 "blocks": 14336, 00:21:37.406 "percent": 22 00:21:37.406 } 00:21:37.406 }, 00:21:37.406 "base_bdevs_list": [ 00:21:37.406 { 00:21:37.406 "name": "spare", 00:21:37.406 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:37.406 "is_configured": true, 00:21:37.406 "data_offset": 2048, 00:21:37.406 "data_size": 63488 00:21:37.406 }, 00:21:37.406 { 00:21:37.406 "name": "BaseBdev2", 00:21:37.406 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:37.406 "is_configured": true, 00:21:37.406 "data_offset": 2048, 00:21:37.406 "data_size": 63488 00:21:37.406 }, 00:21:37.406 { 00:21:37.406 "name": "BaseBdev3", 00:21:37.406 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:37.406 "is_configured": true, 00:21:37.406 "data_offset": 2048, 00:21:37.406 "data_size": 63488 00:21:37.406 }, 00:21:37.406 { 00:21:37.406 "name": "BaseBdev4", 00:21:37.406 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:37.406 "is_configured": true, 00:21:37.406 "data_offset": 2048, 00:21:37.406 "data_size": 63488 00:21:37.406 } 00:21:37.406 ] 00:21:37.406 }' 00:21:37.406 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.407 [2024-11-06 09:14:36.285158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.407 [2024-11-06 
09:14:36.330471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:37.407 [2024-11-06 09:14:36.353523] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:37.407 [2024-11-06 09:14:36.362492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.407 [2024-11-06 09:14:36.362535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.407 [2024-11-06 09:14:36.362548] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:37.407 [2024-11-06 09:14:36.404526] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.407 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.665 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.665 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.665 "name": "raid_bdev1", 00:21:37.665 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:37.665 "strip_size_kb": 0, 00:21:37.665 "state": "online", 00:21:37.665 "raid_level": "raid1", 00:21:37.665 "superblock": true, 00:21:37.665 "num_base_bdevs": 4, 00:21:37.665 "num_base_bdevs_discovered": 3, 00:21:37.665 "num_base_bdevs_operational": 3, 00:21:37.665 "base_bdevs_list": [ 00:21:37.665 { 00:21:37.665 "name": null, 00:21:37.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.665 "is_configured": false, 00:21:37.665 "data_offset": 0, 00:21:37.665 "data_size": 63488 00:21:37.665 }, 00:21:37.665 { 00:21:37.665 "name": "BaseBdev2", 00:21:37.665 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:37.665 "is_configured": true, 00:21:37.665 "data_offset": 2048, 00:21:37.665 "data_size": 63488 00:21:37.665 }, 00:21:37.665 { 00:21:37.665 "name": "BaseBdev3", 00:21:37.665 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:37.665 "is_configured": true, 00:21:37.665 "data_offset": 2048, 00:21:37.665 "data_size": 63488 00:21:37.665 }, 00:21:37.665 { 00:21:37.665 "name": "BaseBdev4", 00:21:37.665 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:37.665 "is_configured": true, 00:21:37.665 "data_offset": 2048, 00:21:37.665 "data_size": 63488 00:21:37.665 } 
00:21:37.665 ] 00:21:37.665 }' 00:21:37.665 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.665 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.924 "name": "raid_bdev1", 00:21:37.924 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:37.924 "strip_size_kb": 0, 00:21:37.924 "state": "online", 00:21:37.924 "raid_level": "raid1", 00:21:37.924 "superblock": true, 00:21:37.924 "num_base_bdevs": 4, 00:21:37.924 "num_base_bdevs_discovered": 3, 00:21:37.924 "num_base_bdevs_operational": 3, 00:21:37.924 "base_bdevs_list": [ 00:21:37.924 { 00:21:37.924 "name": null, 00:21:37.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.924 "is_configured": false, 
00:21:37.924 "data_offset": 0, 00:21:37.924 "data_size": 63488 00:21:37.924 }, 00:21:37.924 { 00:21:37.924 "name": "BaseBdev2", 00:21:37.924 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:37.924 "is_configured": true, 00:21:37.924 "data_offset": 2048, 00:21:37.924 "data_size": 63488 00:21:37.924 }, 00:21:37.924 { 00:21:37.924 "name": "BaseBdev3", 00:21:37.924 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:37.924 "is_configured": true, 00:21:37.924 "data_offset": 2048, 00:21:37.924 "data_size": 63488 00:21:37.924 }, 00:21:37.924 { 00:21:37.924 "name": "BaseBdev4", 00:21:37.924 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:37.924 "is_configured": true, 00:21:37.924 "data_offset": 2048, 00:21:37.924 "data_size": 63488 00:21:37.924 } 00:21:37.924 ] 00:21:37.924 }' 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.924 161.50 IOPS, 484.50 MiB/s [2024-11-06T09:14:36.964Z] 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.924 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:37.925 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:37.925 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.925 09:14:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.183 [2024-11-06 09:14:36.970822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:38.183 09:14:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.183 09:14:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:38.183 [2024-11-06 
09:14:37.033470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:38.183 [2024-11-06 09:14:37.035863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:38.183 [2024-11-06 09:14:37.152136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:38.183 [2024-11-06 09:14:37.152740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:38.442 [2024-11-06 09:14:37.364398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:38.442 [2024-11-06 09:14:37.364715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:38.701 [2024-11-06 09:14:37.593406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:38.960 [2024-11-06 09:14:37.813229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:38.960 [2024-11-06 09:14:37.813913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:39.220 152.33 IOPS, 457.00 MiB/s [2024-11-06T09:14:38.260Z] 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.220 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.220 "name": "raid_bdev1", 00:21:39.220 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:39.220 "strip_size_kb": 0, 00:21:39.220 "state": "online", 00:21:39.221 "raid_level": "raid1", 00:21:39.221 "superblock": true, 00:21:39.221 "num_base_bdevs": 4, 00:21:39.221 "num_base_bdevs_discovered": 4, 00:21:39.221 "num_base_bdevs_operational": 4, 00:21:39.221 "process": { 00:21:39.221 "type": "rebuild", 00:21:39.221 "target": "spare", 00:21:39.221 "progress": { 00:21:39.221 "blocks": 12288, 00:21:39.221 "percent": 19 00:21:39.221 } 00:21:39.221 }, 00:21:39.221 "base_bdevs_list": [ 00:21:39.221 { 00:21:39.221 "name": "spare", 00:21:39.221 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:39.221 "is_configured": true, 00:21:39.221 "data_offset": 2048, 00:21:39.221 "data_size": 63488 00:21:39.221 }, 00:21:39.221 { 00:21:39.221 "name": "BaseBdev2", 00:21:39.221 "uuid": "3da8466d-9a21-5ba1-ba31-4a714c537fb7", 00:21:39.221 "is_configured": true, 00:21:39.221 "data_offset": 2048, 00:21:39.221 "data_size": 63488 00:21:39.221 }, 00:21:39.221 { 00:21:39.221 "name": "BaseBdev3", 00:21:39.221 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:39.221 "is_configured": true, 00:21:39.221 "data_offset": 2048, 00:21:39.221 "data_size": 63488 00:21:39.221 }, 00:21:39.221 { 00:21:39.221 "name": "BaseBdev4", 
00:21:39.221 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:39.221 "is_configured": true, 00:21:39.221 "data_offset": 2048, 00:21:39.221 "data_size": 63488 00:21:39.221 } 00:21:39.221 ] 00:21:39.221 }' 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.221 [2024-11-06 09:14:38.152863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:39.221 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.221 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.221 [2024-11-06 09:14:38.177788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:39.481 [2024-11-06 09:14:38.286206] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:39.481 [2024-11-06 09:14:38.392970] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:39.481 [2024-11-06 09:14:38.393166] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:39.481 [2024-11-06 09:14:38.396110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:39.481 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.481 "name": "raid_bdev1", 00:21:39.481 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:39.481 "strip_size_kb": 0, 00:21:39.481 "state": "online", 00:21:39.481 "raid_level": "raid1", 00:21:39.481 "superblock": true, 00:21:39.481 "num_base_bdevs": 4, 00:21:39.481 "num_base_bdevs_discovered": 3, 00:21:39.481 "num_base_bdevs_operational": 3, 00:21:39.481 "process": { 00:21:39.481 "type": "rebuild", 00:21:39.481 "target": "spare", 00:21:39.481 "progress": { 00:21:39.481 "blocks": 16384, 00:21:39.481 "percent": 25 00:21:39.481 } 00:21:39.481 }, 00:21:39.481 "base_bdevs_list": [ 00:21:39.481 { 00:21:39.481 "name": "spare", 00:21:39.481 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:39.481 "is_configured": true, 00:21:39.481 "data_offset": 2048, 00:21:39.481 "data_size": 63488 00:21:39.481 }, 00:21:39.481 { 00:21:39.481 "name": null, 00:21:39.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.481 "is_configured": false, 00:21:39.481 "data_offset": 0, 00:21:39.481 "data_size": 63488 00:21:39.481 }, 00:21:39.481 { 00:21:39.481 "name": "BaseBdev3", 00:21:39.481 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:39.481 "is_configured": true, 00:21:39.482 "data_offset": 2048, 00:21:39.482 "data_size": 63488 00:21:39.482 }, 00:21:39.482 { 00:21:39.482 "name": "BaseBdev4", 00:21:39.482 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:39.482 "is_configured": true, 00:21:39.482 "data_offset": 2048, 00:21:39.482 "data_size": 63488 00:21:39.482 } 00:21:39.482 ] 00:21:39.482 }' 00:21:39.482 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.482 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.482 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=493 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.741 "name": "raid_bdev1", 00:21:39.741 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:39.741 "strip_size_kb": 0, 00:21:39.741 "state": "online", 00:21:39.741 "raid_level": "raid1", 00:21:39.741 "superblock": true, 00:21:39.741 "num_base_bdevs": 4, 00:21:39.741 "num_base_bdevs_discovered": 3, 00:21:39.741 "num_base_bdevs_operational": 3, 00:21:39.741 "process": { 00:21:39.741 "type": "rebuild", 
00:21:39.741 "target": "spare", 00:21:39.741 "progress": { 00:21:39.741 "blocks": 18432, 00:21:39.741 "percent": 29 00:21:39.741 } 00:21:39.741 }, 00:21:39.741 "base_bdevs_list": [ 00:21:39.741 { 00:21:39.741 "name": "spare", 00:21:39.741 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:39.741 "is_configured": true, 00:21:39.741 "data_offset": 2048, 00:21:39.741 "data_size": 63488 00:21:39.741 }, 00:21:39.741 { 00:21:39.741 "name": null, 00:21:39.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.741 "is_configured": false, 00:21:39.741 "data_offset": 0, 00:21:39.741 "data_size": 63488 00:21:39.741 }, 00:21:39.741 { 00:21:39.741 "name": "BaseBdev3", 00:21:39.741 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:39.741 "is_configured": true, 00:21:39.741 "data_offset": 2048, 00:21:39.741 "data_size": 63488 00:21:39.741 }, 00:21:39.741 { 00:21:39.741 "name": "BaseBdev4", 00:21:39.741 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:39.741 "is_configured": true, 00:21:39.741 "data_offset": 2048, 00:21:39.741 "data_size": 63488 00:21:39.741 } 00:21:39.741 ] 00:21:39.741 }' 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.741 09:14:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:39.741 [2024-11-06 09:14:38.776370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:40.567 134.50 IOPS, 403.50 MiB/s [2024-11-06T09:14:39.607Z] [2024-11-06 09:14:39.466830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:40.567 [2024-11-06 09:14:39.467851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:40.827 "name": "raid_bdev1", 00:21:40.827 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:40.827 "strip_size_kb": 0, 00:21:40.827 "state": "online", 00:21:40.827 "raid_level": "raid1", 00:21:40.827 "superblock": true, 00:21:40.827 "num_base_bdevs": 4, 00:21:40.827 "num_base_bdevs_discovered": 3, 00:21:40.827 "num_base_bdevs_operational": 3, 00:21:40.827 "process": { 00:21:40.827 "type": "rebuild", 
00:21:40.827 "target": "spare", 00:21:40.827 "progress": { 00:21:40.827 "blocks": 32768, 00:21:40.827 "percent": 51 00:21:40.827 } 00:21:40.827 }, 00:21:40.827 "base_bdevs_list": [ 00:21:40.827 { 00:21:40.827 "name": "spare", 00:21:40.827 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:40.827 "is_configured": true, 00:21:40.827 "data_offset": 2048, 00:21:40.827 "data_size": 63488 00:21:40.827 }, 00:21:40.827 { 00:21:40.827 "name": null, 00:21:40.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.827 "is_configured": false, 00:21:40.827 "data_offset": 0, 00:21:40.827 "data_size": 63488 00:21:40.827 }, 00:21:40.827 { 00:21:40.827 "name": "BaseBdev3", 00:21:40.827 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:40.827 "is_configured": true, 00:21:40.827 "data_offset": 2048, 00:21:40.827 "data_size": 63488 00:21:40.827 }, 00:21:40.827 { 00:21:40.827 "name": "BaseBdev4", 00:21:40.827 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:40.827 "is_configured": true, 00:21:40.827 "data_offset": 2048, 00:21:40.827 "data_size": 63488 00:21:40.827 } 00:21:40.827 ] 00:21:40.827 }' 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:40.827 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.828 09:14:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:41.086 115.80 IOPS, 347.40 MiB/s [2024-11-06T09:14:40.126Z] [2024-11-06 09:14:39.927784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:41.345 [2024-11-06 09:14:40.157883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.913 "name": "raid_bdev1", 00:21:41.913 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:41.913 "strip_size_kb": 0, 00:21:41.913 "state": "online", 00:21:41.913 "raid_level": "raid1", 00:21:41.913 "superblock": true, 00:21:41.913 "num_base_bdevs": 4, 00:21:41.913 "num_base_bdevs_discovered": 3, 00:21:41.913 "num_base_bdevs_operational": 3, 00:21:41.913 "process": { 00:21:41.913 "type": "rebuild", 00:21:41.913 "target": "spare", 00:21:41.913 "progress": { 00:21:41.913 "blocks": 51200, 00:21:41.913 "percent": 80 00:21:41.913 } 00:21:41.913 }, 00:21:41.913 
"base_bdevs_list": [ 00:21:41.913 { 00:21:41.913 "name": "spare", 00:21:41.913 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:41.913 "is_configured": true, 00:21:41.913 "data_offset": 2048, 00:21:41.913 "data_size": 63488 00:21:41.913 }, 00:21:41.913 { 00:21:41.913 "name": null, 00:21:41.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.913 "is_configured": false, 00:21:41.913 "data_offset": 0, 00:21:41.913 "data_size": 63488 00:21:41.913 }, 00:21:41.913 { 00:21:41.913 "name": "BaseBdev3", 00:21:41.913 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:41.913 "is_configured": true, 00:21:41.913 "data_offset": 2048, 00:21:41.913 "data_size": 63488 00:21:41.913 }, 00:21:41.913 { 00:21:41.913 "name": "BaseBdev4", 00:21:41.913 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:41.913 "is_configured": true, 00:21:41.913 "data_offset": 2048, 00:21:41.913 "data_size": 63488 00:21:41.913 } 00:21:41.913 ] 00:21:41.913 }' 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.913 104.67 IOPS, 314.00 MiB/s [2024-11-06T09:14:40.953Z] [2024-11-06 09:14:40.879094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.913 09:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:42.848 [2024-11-06 09:14:41.522417] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:42.848 [2024-11-06 09:14:41.622253] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:21:42.848 [2024-11-06 09:14:41.625169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.108 95.14 IOPS, 285.43 MiB/s [2024-11-06T09:14:42.148Z] 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.108 "name": "raid_bdev1", 00:21:43.108 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:43.108 "strip_size_kb": 0, 00:21:43.108 "state": "online", 00:21:43.108 "raid_level": "raid1", 00:21:43.108 "superblock": true, 00:21:43.108 "num_base_bdevs": 4, 00:21:43.108 "num_base_bdevs_discovered": 3, 00:21:43.108 "num_base_bdevs_operational": 3, 00:21:43.108 "base_bdevs_list": [ 00:21:43.108 { 00:21:43.108 "name": "spare", 00:21:43.108 "uuid": 
"16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": null, 00:21:43.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.108 "is_configured": false, 00:21:43.108 "data_offset": 0, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": "BaseBdev3", 00:21:43.108 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": "BaseBdev4", 00:21:43.108 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 } 00:21:43.108 ] 00:21:43.108 }' 00:21:43.108 09:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.108 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.108 "name": "raid_bdev1", 00:21:43.108 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:43.108 "strip_size_kb": 0, 00:21:43.108 "state": "online", 00:21:43.108 "raid_level": "raid1", 00:21:43.108 "superblock": true, 00:21:43.108 "num_base_bdevs": 4, 00:21:43.108 "num_base_bdevs_discovered": 3, 00:21:43.108 "num_base_bdevs_operational": 3, 00:21:43.108 "base_bdevs_list": [ 00:21:43.108 { 00:21:43.108 "name": "spare", 00:21:43.108 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": null, 00:21:43.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.108 "is_configured": false, 00:21:43.108 "data_offset": 0, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": "BaseBdev3", 00:21:43.108 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 }, 00:21:43.108 { 00:21:43.108 "name": "BaseBdev4", 00:21:43.108 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:43.108 "is_configured": true, 00:21:43.108 "data_offset": 2048, 00:21:43.108 "data_size": 63488 00:21:43.108 } 00:21:43.108 ] 00:21:43.108 }' 00:21:43.108 
09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.367 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.367 "name": "raid_bdev1", 00:21:43.367 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:43.367 "strip_size_kb": 0, 00:21:43.367 "state": "online", 00:21:43.367 "raid_level": "raid1", 00:21:43.367 "superblock": true, 00:21:43.367 "num_base_bdevs": 4, 00:21:43.367 "num_base_bdevs_discovered": 3, 00:21:43.367 "num_base_bdevs_operational": 3, 00:21:43.367 "base_bdevs_list": [ 00:21:43.367 { 00:21:43.367 "name": "spare", 00:21:43.367 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:43.367 "is_configured": true, 00:21:43.367 "data_offset": 2048, 00:21:43.367 "data_size": 63488 00:21:43.367 }, 00:21:43.367 { 00:21:43.367 "name": null, 00:21:43.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.367 "is_configured": false, 00:21:43.367 "data_offset": 0, 00:21:43.367 "data_size": 63488 00:21:43.367 }, 00:21:43.367 { 00:21:43.367 "name": "BaseBdev3", 00:21:43.367 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:43.367 "is_configured": true, 00:21:43.367 "data_offset": 2048, 00:21:43.367 "data_size": 63488 00:21:43.367 }, 00:21:43.367 { 00:21:43.367 "name": "BaseBdev4", 00:21:43.367 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:43.367 "is_configured": true, 00:21:43.367 "data_offset": 2048, 00:21:43.367 "data_size": 63488 00:21:43.368 } 00:21:43.368 ] 00:21:43.368 }' 00:21:43.368 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.368 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.627 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.627 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:43.627 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.627 [2024-11-06 09:14:42.637151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.627 [2024-11-06 09:14:42.637336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.885 00:21:43.885 Latency(us) 00:21:43.885 [2024-11-06T09:14:42.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.885 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:43.886 raid_bdev1 : 7.87 88.34 265.02 0.00 0.00 16431.70 302.68 112016.55 00:21:43.886 [2024-11-06T09:14:42.926Z] =================================================================================================================== 00:21:43.886 [2024-11-06T09:14:42.926Z] Total : 88.34 265.02 0.00 0.00 16431.70 302.68 112016.55 00:21:43.886 [2024-11-06 09:14:42.727251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.886 { 00:21:43.886 "results": [ 00:21:43.886 { 00:21:43.886 "job": "raid_bdev1", 00:21:43.886 "core_mask": "0x1", 00:21:43.886 "workload": "randrw", 00:21:43.886 "percentage": 50, 00:21:43.886 "status": "finished", 00:21:43.886 "queue_depth": 2, 00:21:43.886 "io_size": 3145728, 00:21:43.886 "runtime": 7.86724, 00:21:43.886 "iops": 88.34101921385391, 00:21:43.886 "mibps": 265.02305764156176, 00:21:43.886 "io_failed": 0, 00:21:43.886 "io_timeout": 0, 00:21:43.886 "avg_latency_us": 16431.697317037935, 00:21:43.886 "min_latency_us": 302.67630522088353, 00:21:43.886 "max_latency_us": 112016.55261044177 00:21:43.886 } 00:21:43.886 ], 00:21:43.886 "core_count": 1 00:21:43.886 } 00:21:43.886 [2024-11-06 09:14:42.727446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.886 [2024-11-06 09:14:42.727571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:21:43.886 [2024-11-06 09:14:42.727585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.886 09:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:44.145 /dev/nbd0 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.145 1+0 records in 00:21:44.145 1+0 records out 00:21:44.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118455 s, 3.5 MB/s 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:21:44.145 09:14:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.145 09:14:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.145 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:44.403 /dev/nbd1 00:21:44.403 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:44.403 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.404 1+0 records in 00:21:44.404 1+0 records out 00:21:44.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435882 s, 9.4 MB/s 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.404 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.662 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:44.921 
09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.921 09:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:45.181 /dev/nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 
-- # local nbd_name=nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:45.181 1+0 records in 00:21:45.181 1+0 records out 00:21:45.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286433 s, 14.3 MB/s 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 
-- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.181 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:45.440 09:14:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.440 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 [2024-11-06 09:14:44.619410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.700 [2024-11-06 09:14:44.619473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.700 [2024-11-06 09:14:44.619499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:45.700 [2024-11-06 09:14:44.619519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.700 [2024-11-06 09:14:44.622008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.700 [2024-11-06 09:14:44.622180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.700 [2024-11-06 09:14:44.622305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:45.700 [2024-11-06 09:14:44.622366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.700 [2024-11-06 09:14:44.622519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:45.700 [2024-11-06 09:14:44.622607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:45.700 spare 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 [2024-11-06 
09:14:44.722550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:45.700 [2024-11-06 09:14:44.722595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:45.700 [2024-11-06 09:14:44.722963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:21:45.700 [2024-11-06 09:14:44.723146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:45.700 [2024-11-06 09:14:44.723160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:45.700 [2024-11-06 09:14:44.723410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.700 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.959 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.959 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.959 "name": "raid_bdev1", 00:21:45.959 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:45.959 "strip_size_kb": 0, 00:21:45.959 "state": "online", 00:21:45.959 "raid_level": "raid1", 00:21:45.959 "superblock": true, 00:21:45.959 "num_base_bdevs": 4, 00:21:45.959 "num_base_bdevs_discovered": 3, 00:21:45.959 "num_base_bdevs_operational": 3, 00:21:45.959 "base_bdevs_list": [ 00:21:45.959 { 00:21:45.959 "name": "spare", 00:21:45.959 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:45.959 "is_configured": true, 00:21:45.959 "data_offset": 2048, 00:21:45.959 "data_size": 63488 00:21:45.959 }, 00:21:45.959 { 00:21:45.959 "name": null, 00:21:45.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.959 "is_configured": false, 00:21:45.959 "data_offset": 2048, 00:21:45.959 "data_size": 63488 00:21:45.959 }, 00:21:45.959 { 00:21:45.959 "name": "BaseBdev3", 00:21:45.959 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:45.959 "is_configured": true, 00:21:45.959 "data_offset": 2048, 00:21:45.959 "data_size": 63488 00:21:45.959 }, 00:21:45.959 { 00:21:45.959 "name": "BaseBdev4", 00:21:45.959 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:45.959 "is_configured": true, 00:21:45.959 "data_offset": 2048, 00:21:45.959 "data_size": 63488 00:21:45.959 } 00:21:45.959 ] 00:21:45.959 }' 
00:21:45.959 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.959 09:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.218 "name": "raid_bdev1", 00:21:46.218 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:46.218 "strip_size_kb": 0, 00:21:46.218 "state": "online", 00:21:46.218 "raid_level": "raid1", 00:21:46.218 "superblock": true, 00:21:46.218 "num_base_bdevs": 4, 00:21:46.218 "num_base_bdevs_discovered": 3, 00:21:46.218 "num_base_bdevs_operational": 3, 00:21:46.218 "base_bdevs_list": [ 00:21:46.218 { 00:21:46.218 "name": "spare", 00:21:46.218 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:46.218 "is_configured": true, 00:21:46.218 "data_offset": 
2048, 00:21:46.218 "data_size": 63488 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "name": null, 00:21:46.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.218 "is_configured": false, 00:21:46.218 "data_offset": 2048, 00:21:46.218 "data_size": 63488 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "name": "BaseBdev3", 00:21:46.218 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:46.218 "is_configured": true, 00:21:46.218 "data_offset": 2048, 00:21:46.218 "data_size": 63488 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "name": "BaseBdev4", 00:21:46.218 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:46.218 "is_configured": true, 00:21:46.218 "data_offset": 2048, 00:21:46.218 "data_size": 63488 00:21:46.218 } 00:21:46.218 ] 00:21:46.218 }' 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.218 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.478 [2024-11-06 09:14:45.286797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.478 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.479 "name": "raid_bdev1", 00:21:46.479 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:46.479 "strip_size_kb": 0, 00:21:46.479 "state": "online", 00:21:46.479 "raid_level": "raid1", 00:21:46.479 "superblock": true, 00:21:46.479 "num_base_bdevs": 4, 00:21:46.479 "num_base_bdevs_discovered": 2, 00:21:46.479 "num_base_bdevs_operational": 2, 00:21:46.479 "base_bdevs_list": [ 00:21:46.479 { 00:21:46.479 "name": null, 00:21:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.479 "is_configured": false, 00:21:46.479 "data_offset": 0, 00:21:46.479 "data_size": 63488 00:21:46.479 }, 00:21:46.479 { 00:21:46.479 "name": null, 00:21:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.479 "is_configured": false, 00:21:46.479 "data_offset": 2048, 00:21:46.479 "data_size": 63488 00:21:46.479 }, 00:21:46.479 { 00:21:46.479 "name": "BaseBdev3", 00:21:46.479 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:46.479 "is_configured": true, 00:21:46.479 "data_offset": 2048, 00:21:46.479 "data_size": 63488 00:21:46.479 }, 00:21:46.479 { 00:21:46.479 "name": "BaseBdev4", 00:21:46.479 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:46.479 "is_configured": true, 00:21:46.479 "data_offset": 2048, 00:21:46.479 "data_size": 63488 00:21:46.479 } 00:21:46.479 ] 00:21:46.479 }' 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.479 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.742 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.742 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:46.742 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.742 [2024-11-06 09:14:45.646381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.742 [2024-11-06 09:14:45.646588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:46.742 [2024-11-06 09:14:45.646605] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:46.742 [2024-11-06 09:14:45.646652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.742 [2024-11-06 09:14:45.661218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:21:46.742 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.742 09:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:46.742 [2024-11-06 09:14:45.663443] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.677 09:14:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:47.677 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.936 "name": "raid_bdev1", 00:21:47.936 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:47.936 "strip_size_kb": 0, 00:21:47.936 "state": "online", 00:21:47.936 "raid_level": "raid1", 00:21:47.936 "superblock": true, 00:21:47.936 "num_base_bdevs": 4, 00:21:47.936 "num_base_bdevs_discovered": 3, 00:21:47.936 "num_base_bdevs_operational": 3, 00:21:47.936 "process": { 00:21:47.936 "type": "rebuild", 00:21:47.936 "target": "spare", 00:21:47.936 "progress": { 00:21:47.936 "blocks": 20480, 00:21:47.936 "percent": 32 00:21:47.936 } 00:21:47.936 }, 00:21:47.936 "base_bdevs_list": [ 00:21:47.936 { 00:21:47.936 "name": "spare", 00:21:47.936 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:47.936 "is_configured": true, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": null, 00:21:47.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.936 "is_configured": false, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": "BaseBdev3", 00:21:47.936 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:47.936 "is_configured": true, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": "BaseBdev4", 00:21:47.936 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:47.936 "is_configured": true, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 } 00:21:47.936 ] 00:21:47.936 }' 00:21:47.936 09:14:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:47.936 [2024-11-06 09:14:46.803617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.936 [2024-11-06 09:14:46.869160] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:47.936 [2024-11-06 09:14:46.869420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.936 [2024-11-06 09:14:46.869535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.936 [2024-11-06 09:14:46.869576] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.936 "name": "raid_bdev1", 00:21:47.936 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:47.936 "strip_size_kb": 0, 00:21:47.936 "state": "online", 00:21:47.936 "raid_level": "raid1", 00:21:47.936 "superblock": true, 00:21:47.936 "num_base_bdevs": 4, 00:21:47.936 "num_base_bdevs_discovered": 2, 00:21:47.936 "num_base_bdevs_operational": 2, 00:21:47.936 "base_bdevs_list": [ 00:21:47.936 { 00:21:47.936 "name": null, 00:21:47.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.936 "is_configured": false, 00:21:47.936 "data_offset": 0, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": null, 00:21:47.936 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:47.936 "is_configured": false, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": "BaseBdev3", 00:21:47.936 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:47.936 "is_configured": true, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 }, 00:21:47.936 { 00:21:47.936 "name": "BaseBdev4", 00:21:47.936 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:47.936 "is_configured": true, 00:21:47.936 "data_offset": 2048, 00:21:47.936 "data_size": 63488 00:21:47.936 } 00:21:47.936 ] 00:21:47.936 }' 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.936 09:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 09:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.502 09:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.502 09:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 [2024-11-06 09:14:47.315259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.502 [2024-11-06 09:14:47.315341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.502 [2024-11-06 09:14:47.315369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:48.502 [2024-11-06 09:14:47.315381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.502 [2024-11-06 09:14:47.315873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.502 [2024-11-06 09:14:47.315894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.502 [2024-11-06 09:14:47.315992] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:48.502 [2024-11-06 09:14:47.316006] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:48.502 [2024-11-06 09:14:47.316020] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:48.502 [2024-11-06 09:14:47.316042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.502 [2024-11-06 09:14:47.331424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:21:48.502 spare 00:21:48.502 09:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.502 09:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:48.502 [2024-11-06 09:14:47.333538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 09:14:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:49.437 "name": "raid_bdev1", 00:21:49.437 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:49.437 "strip_size_kb": 0, 00:21:49.437 "state": "online", 00:21:49.437 "raid_level": "raid1", 00:21:49.437 "superblock": true, 00:21:49.437 "num_base_bdevs": 4, 00:21:49.437 "num_base_bdevs_discovered": 3, 00:21:49.437 "num_base_bdevs_operational": 3, 00:21:49.437 "process": { 00:21:49.437 "type": "rebuild", 00:21:49.437 "target": "spare", 00:21:49.437 "progress": { 00:21:49.437 "blocks": 20480, 00:21:49.437 "percent": 32 00:21:49.437 } 00:21:49.437 }, 00:21:49.437 "base_bdevs_list": [ 00:21:49.437 { 00:21:49.437 "name": "spare", 00:21:49.437 "uuid": "16d5dfdc-5b29-5315-a9d7-1bd0f568a7e4", 00:21:49.437 "is_configured": true, 00:21:49.437 "data_offset": 2048, 00:21:49.437 "data_size": 63488 00:21:49.437 }, 00:21:49.437 { 00:21:49.437 "name": null, 00:21:49.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.437 "is_configured": false, 00:21:49.437 "data_offset": 2048, 00:21:49.437 "data_size": 63488 00:21:49.437 }, 00:21:49.437 { 00:21:49.437 "name": "BaseBdev3", 00:21:49.437 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:49.437 "is_configured": true, 00:21:49.437 "data_offset": 2048, 00:21:49.437 "data_size": 63488 00:21:49.437 }, 00:21:49.437 { 00:21:49.437 "name": "BaseBdev4", 00:21:49.437 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:49.437 "is_configured": true, 00:21:49.437 "data_offset": 2048, 00:21:49.437 "data_size": 63488 00:21:49.438 } 00:21:49.438 ] 00:21:49.438 }' 00:21:49.438 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.438 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.438 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.438 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.697 [2024-11-06 09:14:48.481619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:49.697 [2024-11-06 09:14:48.539176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:49.697 [2024-11-06 09:14:48.539321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.697 [2024-11-06 09:14:48.539344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:49.697 [2024-11-06 09:14:48.539356] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.697 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.697 "name": "raid_bdev1", 00:21:49.697 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:49.697 "strip_size_kb": 0, 00:21:49.697 "state": "online", 00:21:49.697 "raid_level": "raid1", 00:21:49.697 "superblock": true, 00:21:49.697 "num_base_bdevs": 4, 00:21:49.697 "num_base_bdevs_discovered": 2, 00:21:49.698 "num_base_bdevs_operational": 2, 00:21:49.698 "base_bdevs_list": [ 00:21:49.698 { 00:21:49.698 "name": null, 00:21:49.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.698 "is_configured": false, 00:21:49.698 "data_offset": 0, 00:21:49.698 "data_size": 63488 00:21:49.698 }, 00:21:49.698 { 00:21:49.698 "name": null, 00:21:49.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.698 "is_configured": false, 00:21:49.698 "data_offset": 2048, 00:21:49.698 "data_size": 63488 00:21:49.698 }, 
00:21:49.698 { 00:21:49.698 "name": "BaseBdev3", 00:21:49.698 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:49.698 "is_configured": true, 00:21:49.698 "data_offset": 2048, 00:21:49.698 "data_size": 63488 00:21:49.698 }, 00:21:49.698 { 00:21:49.698 "name": "BaseBdev4", 00:21:49.698 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:49.698 "is_configured": true, 00:21:49.698 "data_offset": 2048, 00:21:49.698 "data_size": 63488 00:21:49.698 } 00:21:49.698 ] 00:21:49.698 }' 00:21:49.698 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.698 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 09:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.217 "name": "raid_bdev1", 00:21:50.217 "uuid": 
"94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:50.217 "strip_size_kb": 0, 00:21:50.217 "state": "online", 00:21:50.217 "raid_level": "raid1", 00:21:50.217 "superblock": true, 00:21:50.217 "num_base_bdevs": 4, 00:21:50.217 "num_base_bdevs_discovered": 2, 00:21:50.217 "num_base_bdevs_operational": 2, 00:21:50.217 "base_bdevs_list": [ 00:21:50.217 { 00:21:50.217 "name": null, 00:21:50.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.217 "is_configured": false, 00:21:50.217 "data_offset": 0, 00:21:50.217 "data_size": 63488 00:21:50.217 }, 00:21:50.217 { 00:21:50.217 "name": null, 00:21:50.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.217 "is_configured": false, 00:21:50.217 "data_offset": 2048, 00:21:50.217 "data_size": 63488 00:21:50.217 }, 00:21:50.217 { 00:21:50.217 "name": "BaseBdev3", 00:21:50.217 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:50.217 "is_configured": true, 00:21:50.217 "data_offset": 2048, 00:21:50.217 "data_size": 63488 00:21:50.217 }, 00:21:50.217 { 00:21:50.217 "name": "BaseBdev4", 00:21:50.217 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:50.217 "is_configured": true, 00:21:50.217 "data_offset": 2048, 00:21:50.217 "data_size": 63488 00:21:50.217 } 00:21:50.217 ] 00:21:50.217 }' 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.217 09:14:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.217 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.217 [2024-11-06 09:14:49.100365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:50.217 [2024-11-06 09:14:49.100431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.217 [2024-11-06 09:14:49.100454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:50.217 [2024-11-06 09:14:49.100468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.217 [2024-11-06 09:14:49.100920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.217 [2024-11-06 09:14:49.100955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:50.217 [2024-11-06 09:14:49.101045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:50.217 [2024-11-06 09:14:49.101065] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:50.218 [2024-11-06 09:14:49.101075] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:50.218 [2024-11-06 09:14:49.101091] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:50.218 BaseBdev1 00:21:50.218 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:50.218 09:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:51.157 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:51.157 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.157 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.158 "name": "raid_bdev1", 00:21:51.158 "uuid": 
"94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:51.158 "strip_size_kb": 0, 00:21:51.158 "state": "online", 00:21:51.158 "raid_level": "raid1", 00:21:51.158 "superblock": true, 00:21:51.158 "num_base_bdevs": 4, 00:21:51.158 "num_base_bdevs_discovered": 2, 00:21:51.158 "num_base_bdevs_operational": 2, 00:21:51.158 "base_bdevs_list": [ 00:21:51.158 { 00:21:51.158 "name": null, 00:21:51.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.158 "is_configured": false, 00:21:51.158 "data_offset": 0, 00:21:51.158 "data_size": 63488 00:21:51.158 }, 00:21:51.158 { 00:21:51.158 "name": null, 00:21:51.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.158 "is_configured": false, 00:21:51.158 "data_offset": 2048, 00:21:51.158 "data_size": 63488 00:21:51.158 }, 00:21:51.158 { 00:21:51.158 "name": "BaseBdev3", 00:21:51.158 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:51.158 "is_configured": true, 00:21:51.158 "data_offset": 2048, 00:21:51.158 "data_size": 63488 00:21:51.158 }, 00:21:51.158 { 00:21:51.158 "name": "BaseBdev4", 00:21:51.158 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:51.158 "is_configured": true, 00:21:51.158 "data_offset": 2048, 00:21:51.158 "data_size": 63488 00:21:51.158 } 00:21:51.158 ] 00:21:51.158 }' 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.158 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.725 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.725 "name": "raid_bdev1", 00:21:51.725 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:51.725 "strip_size_kb": 0, 00:21:51.725 "state": "online", 00:21:51.725 "raid_level": "raid1", 00:21:51.725 "superblock": true, 00:21:51.725 "num_base_bdevs": 4, 00:21:51.725 "num_base_bdevs_discovered": 2, 00:21:51.725 "num_base_bdevs_operational": 2, 00:21:51.725 "base_bdevs_list": [ 00:21:51.725 { 00:21:51.725 "name": null, 00:21:51.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.725 "is_configured": false, 00:21:51.725 "data_offset": 0, 00:21:51.725 "data_size": 63488 00:21:51.725 }, 00:21:51.725 { 00:21:51.725 "name": null, 00:21:51.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.725 "is_configured": false, 00:21:51.725 "data_offset": 2048, 00:21:51.725 "data_size": 63488 00:21:51.725 }, 00:21:51.725 { 00:21:51.725 "name": "BaseBdev3", 00:21:51.725 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:51.725 "is_configured": true, 00:21:51.725 "data_offset": 2048, 00:21:51.725 "data_size": 63488 00:21:51.725 }, 00:21:51.725 { 00:21:51.726 "name": "BaseBdev4", 00:21:51.726 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:51.726 "is_configured": true, 00:21:51.726 "data_offset": 2048, 00:21:51.726 "data_size": 63488 00:21:51.726 
} 00:21:51.726 ] 00:21:51.726 }' 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.726 [2024-11-06 09:14:50.663162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.726 [2024-11-06 09:14:50.663354] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:21:51.726 [2024-11-06 09:14:50.663371] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:51.726 request: 00:21:51.726 { 00:21:51.726 "base_bdev": "BaseBdev1", 00:21:51.726 "raid_bdev": "raid_bdev1", 00:21:51.726 "method": "bdev_raid_add_base_bdev", 00:21:51.726 "req_id": 1 00:21:51.726 } 00:21:51.726 Got JSON-RPC error response 00:21:51.726 response: 00:21:51.726 { 00:21:51.726 "code": -22, 00:21:51.726 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:51.726 } 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.726 09:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.664 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:52.922 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.922 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.922 "name": "raid_bdev1", 00:21:52.922 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:52.922 "strip_size_kb": 0, 00:21:52.922 "state": "online", 00:21:52.922 "raid_level": "raid1", 00:21:52.922 "superblock": true, 00:21:52.922 "num_base_bdevs": 4, 00:21:52.922 "num_base_bdevs_discovered": 2, 00:21:52.922 "num_base_bdevs_operational": 2, 00:21:52.922 "base_bdevs_list": [ 00:21:52.922 { 00:21:52.922 "name": null, 00:21:52.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.922 "is_configured": false, 00:21:52.922 "data_offset": 0, 00:21:52.922 "data_size": 63488 00:21:52.922 }, 00:21:52.922 { 00:21:52.922 "name": null, 00:21:52.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.922 "is_configured": false, 00:21:52.922 "data_offset": 2048, 00:21:52.922 "data_size": 63488 00:21:52.922 }, 00:21:52.922 { 00:21:52.922 "name": "BaseBdev3", 00:21:52.922 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:52.922 "is_configured": true, 00:21:52.922 
"data_offset": 2048, 00:21:52.922 "data_size": 63488 00:21:52.922 }, 00:21:52.922 { 00:21:52.922 "name": "BaseBdev4", 00:21:52.922 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:52.922 "is_configured": true, 00:21:52.922 "data_offset": 2048, 00:21:52.922 "data_size": 63488 00:21:52.922 } 00:21:52.922 ] 00:21:52.922 }' 00:21:52.922 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.922 09:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.180 "name": "raid_bdev1", 00:21:53.180 "uuid": "94cad8e6-9b31-4f3b-a267-d6b518c6bd4d", 00:21:53.180 "strip_size_kb": 0, 00:21:53.180 "state": "online", 00:21:53.180 "raid_level": "raid1", 00:21:53.180 "superblock": true, 
00:21:53.180 "num_base_bdevs": 4, 00:21:53.180 "num_base_bdevs_discovered": 2, 00:21:53.180 "num_base_bdevs_operational": 2, 00:21:53.180 "base_bdevs_list": [ 00:21:53.180 { 00:21:53.180 "name": null, 00:21:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.180 "is_configured": false, 00:21:53.180 "data_offset": 0, 00:21:53.180 "data_size": 63488 00:21:53.180 }, 00:21:53.180 { 00:21:53.180 "name": null, 00:21:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.180 "is_configured": false, 00:21:53.180 "data_offset": 2048, 00:21:53.180 "data_size": 63488 00:21:53.180 }, 00:21:53.180 { 00:21:53.180 "name": "BaseBdev3", 00:21:53.180 "uuid": "935d35d4-7c80-525a-bae7-563338bc24a5", 00:21:53.180 "is_configured": true, 00:21:53.180 "data_offset": 2048, 00:21:53.180 "data_size": 63488 00:21:53.180 }, 00:21:53.180 { 00:21:53.180 "name": "BaseBdev4", 00:21:53.180 "uuid": "6df99445-7df5-5fc7-9abc-ced27904bf4a", 00:21:53.180 "is_configured": true, 00:21:53.180 "data_offset": 2048, 00:21:53.180 "data_size": 63488 00:21:53.180 } 00:21:53.180 ] 00:21:53.180 }' 00:21:53.180 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78889 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 78889 ']' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 78889 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:21:53.439 09:14:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78889 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:53.439 killing process with pid 78889 00:21:53.439 Received shutdown signal, test time was about 17.505069 seconds 00:21:53.439 00:21:53.439 Latency(us) 00:21:53.439 [2024-11-06T09:14:52.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.439 [2024-11-06T09:14:52.479Z] =================================================================================================================== 00:21:53.439 [2024-11-06T09:14:52.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78889' 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 78889 00:21:53.439 [2024-11-06 09:14:52.327176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:53.439 [2024-11-06 09:14:52.327320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.439 09:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 78889 00:21:53.439 [2024-11-06 09:14:52.327392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.439 [2024-11-06 09:14:52.327403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:54.006 [2024-11-06 09:14:52.756373] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.944 09:14:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:21:54.944 00:21:54.944 real 0m20.943s 00:21:54.944 user 0m26.953s 00:21:54.944 sys 0m2.954s 00:21:54.944 09:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:54.944 ************************************ 00:21:54.944 END TEST raid_rebuild_test_sb_io 00:21:54.944 ************************************ 00:21:54.944 09:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.202 09:14:54 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:55.202 09:14:54 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:55.202 09:14:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:55.202 09:14:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:55.202 09:14:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:55.202 ************************************ 00:21:55.202 START TEST raid5f_state_function_test 00:21:55.203 ************************************ 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79605 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:55.203 Process raid pid: 79605 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79605' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79605 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 79605 ']' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.203 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.203 [2024-11-06 09:14:54.132288] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:21:55.203 [2024-11-06 09:14:54.132422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.461 [2024-11-06 09:14:54.312441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.461 [2024-11-06 09:14:54.432591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.719 [2024-11-06 09:14:54.639883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.719 [2024-11-06 09:14:54.639926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.977 [2024-11-06 09:14:54.972370] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:55.977 [2024-11-06 09:14:54.972426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:55.977 [2024-11-06 09:14:54.972437] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:55.977 [2024-11-06 09:14:54.972451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:55.977 [2024-11-06 09:14:54.972465] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:55.977 [2024-11-06 09:14:54.972477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.977 09:14:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.977 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:56.244 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.244 "name": "Existed_Raid", 00:21:56.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.244 "strip_size_kb": 64, 00:21:56.244 "state": "configuring", 00:21:56.244 "raid_level": "raid5f", 00:21:56.244 "superblock": false, 00:21:56.244 "num_base_bdevs": 3, 00:21:56.244 "num_base_bdevs_discovered": 0, 00:21:56.244 "num_base_bdevs_operational": 3, 00:21:56.244 "base_bdevs_list": [ 00:21:56.244 { 00:21:56.244 "name": "BaseBdev1", 00:21:56.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.244 "is_configured": false, 00:21:56.244 "data_offset": 0, 00:21:56.244 "data_size": 0 00:21:56.244 }, 00:21:56.244 { 00:21:56.244 "name": "BaseBdev2", 00:21:56.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.244 "is_configured": false, 00:21:56.244 "data_offset": 0, 00:21:56.244 "data_size": 0 00:21:56.244 }, 00:21:56.244 { 00:21:56.244 "name": "BaseBdev3", 00:21:56.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.244 "is_configured": false, 00:21:56.244 "data_offset": 0, 00:21:56.244 "data_size": 0 00:21:56.244 } 00:21:56.244 ] 00:21:56.244 }' 00:21:56.244 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.244 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [2024-11-06 09:14:55.367765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:56.503 [2024-11-06 09:14:55.367810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [2024-11-06 09:14:55.379735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:56.503 [2024-11-06 09:14:55.379786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:56.503 [2024-11-06 09:14:55.379798] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:56.503 [2024-11-06 09:14:55.379810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:56.503 [2024-11-06 09:14:55.379818] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:56.503 [2024-11-06 09:14:55.379830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [2024-11-06 09:14:55.431570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.503 BaseBdev1 00:21:56.503 09:14:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.503 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [ 00:21:56.504 { 00:21:56.504 "name": "BaseBdev1", 00:21:56.504 "aliases": [ 00:21:56.504 "f805b929-29a5-416c-935c-5ea2fc1cd96d" 00:21:56.504 ], 00:21:56.504 "product_name": "Malloc disk", 00:21:56.504 "block_size": 512, 00:21:56.504 "num_blocks": 65536, 00:21:56.504 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:56.504 "assigned_rate_limits": { 00:21:56.504 "rw_ios_per_sec": 0, 00:21:56.504 
"rw_mbytes_per_sec": 0, 00:21:56.504 "r_mbytes_per_sec": 0, 00:21:56.504 "w_mbytes_per_sec": 0 00:21:56.504 }, 00:21:56.504 "claimed": true, 00:21:56.504 "claim_type": "exclusive_write", 00:21:56.504 "zoned": false, 00:21:56.504 "supported_io_types": { 00:21:56.504 "read": true, 00:21:56.504 "write": true, 00:21:56.504 "unmap": true, 00:21:56.504 "flush": true, 00:21:56.504 "reset": true, 00:21:56.504 "nvme_admin": false, 00:21:56.504 "nvme_io": false, 00:21:56.504 "nvme_io_md": false, 00:21:56.504 "write_zeroes": true, 00:21:56.504 "zcopy": true, 00:21:56.504 "get_zone_info": false, 00:21:56.504 "zone_management": false, 00:21:56.504 "zone_append": false, 00:21:56.504 "compare": false, 00:21:56.504 "compare_and_write": false, 00:21:56.504 "abort": true, 00:21:56.504 "seek_hole": false, 00:21:56.504 "seek_data": false, 00:21:56.504 "copy": true, 00:21:56.504 "nvme_iov_md": false 00:21:56.504 }, 00:21:56.504 "memory_domains": [ 00:21:56.504 { 00:21:56.504 "dma_device_id": "system", 00:21:56.504 "dma_device_type": 1 00:21:56.504 }, 00:21:56.504 { 00:21:56.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.504 "dma_device_type": 2 00:21:56.504 } 00:21:56.504 ], 00:21:56.504 "driver_specific": {} 00:21:56.504 } 00:21:56.504 ] 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.504 09:14:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.504 "name": "Existed_Raid", 00:21:56.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.504 "strip_size_kb": 64, 00:21:56.504 "state": "configuring", 00:21:56.504 "raid_level": "raid5f", 00:21:56.504 "superblock": false, 00:21:56.504 "num_base_bdevs": 3, 00:21:56.504 "num_base_bdevs_discovered": 1, 00:21:56.504 "num_base_bdevs_operational": 3, 00:21:56.504 "base_bdevs_list": [ 00:21:56.504 { 00:21:56.504 "name": "BaseBdev1", 00:21:56.504 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:56.504 "is_configured": true, 00:21:56.504 "data_offset": 0, 00:21:56.504 "data_size": 65536 00:21:56.504 }, 00:21:56.504 { 00:21:56.504 "name": 
"BaseBdev2", 00:21:56.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.504 "is_configured": false, 00:21:56.504 "data_offset": 0, 00:21:56.504 "data_size": 0 00:21:56.504 }, 00:21:56.504 { 00:21:56.504 "name": "BaseBdev3", 00:21:56.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.504 "is_configured": false, 00:21:56.504 "data_offset": 0, 00:21:56.504 "data_size": 0 00:21:56.504 } 00:21:56.504 ] 00:21:56.504 }' 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.504 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.072 [2024-11-06 09:14:55.895432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:57.072 [2024-11-06 09:14:55.895493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.072 [2024-11-06 09:14:55.907473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.072 [2024-11-06 09:14:55.909529] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:21:57.072 [2024-11-06 09:14:55.909575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:57.072 [2024-11-06 09:14:55.909587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:57.072 [2024-11-06 09:14:55.909599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.072 "name": "Existed_Raid", 00:21:57.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.072 "strip_size_kb": 64, 00:21:57.072 "state": "configuring", 00:21:57.072 "raid_level": "raid5f", 00:21:57.072 "superblock": false, 00:21:57.072 "num_base_bdevs": 3, 00:21:57.072 "num_base_bdevs_discovered": 1, 00:21:57.072 "num_base_bdevs_operational": 3, 00:21:57.072 "base_bdevs_list": [ 00:21:57.072 { 00:21:57.072 "name": "BaseBdev1", 00:21:57.072 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:57.072 "is_configured": true, 00:21:57.072 "data_offset": 0, 00:21:57.072 "data_size": 65536 00:21:57.072 }, 00:21:57.072 { 00:21:57.072 "name": "BaseBdev2", 00:21:57.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.072 "is_configured": false, 00:21:57.072 "data_offset": 0, 00:21:57.072 "data_size": 0 00:21:57.072 }, 00:21:57.072 { 00:21:57.072 "name": "BaseBdev3", 00:21:57.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.072 "is_configured": false, 00:21:57.072 "data_offset": 0, 00:21:57.072 "data_size": 0 00:21:57.072 } 00:21:57.072 ] 00:21:57.072 }' 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.072 09:14:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.330 [2024-11-06 09:14:56.333793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.330 BaseBdev2 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.330 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.331 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:57.331 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.331 09:14:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.331 [ 00:21:57.331 { 00:21:57.331 "name": "BaseBdev2", 00:21:57.331 "aliases": [ 00:21:57.331 "05dd27cc-4919-44ba-9cef-cfc292effc19" 00:21:57.331 ], 00:21:57.331 "product_name": "Malloc disk", 00:21:57.331 "block_size": 512, 00:21:57.331 "num_blocks": 65536, 00:21:57.331 "uuid": "05dd27cc-4919-44ba-9cef-cfc292effc19", 00:21:57.331 "assigned_rate_limits": { 00:21:57.331 "rw_ios_per_sec": 0, 00:21:57.331 "rw_mbytes_per_sec": 0, 00:21:57.331 "r_mbytes_per_sec": 0, 00:21:57.331 "w_mbytes_per_sec": 0 00:21:57.331 }, 00:21:57.331 "claimed": true, 00:21:57.331 "claim_type": "exclusive_write", 00:21:57.331 "zoned": false, 00:21:57.331 "supported_io_types": { 00:21:57.331 "read": true, 00:21:57.331 "write": true, 00:21:57.331 "unmap": true, 00:21:57.331 "flush": true, 00:21:57.331 "reset": true, 00:21:57.331 "nvme_admin": false, 00:21:57.331 "nvme_io": false, 00:21:57.331 "nvme_io_md": false, 00:21:57.331 "write_zeroes": true, 00:21:57.590 "zcopy": true, 00:21:57.590 "get_zone_info": false, 00:21:57.590 "zone_management": false, 00:21:57.590 "zone_append": false, 00:21:57.590 "compare": false, 00:21:57.590 "compare_and_write": false, 00:21:57.590 "abort": true, 00:21:57.590 "seek_hole": false, 00:21:57.590 "seek_data": false, 00:21:57.590 "copy": true, 00:21:57.590 "nvme_iov_md": false 00:21:57.590 }, 00:21:57.590 "memory_domains": [ 00:21:57.590 { 00:21:57.590 "dma_device_id": "system", 00:21:57.590 "dma_device_type": 1 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.590 "dma_device_type": 2 00:21:57.590 } 00:21:57.590 ], 00:21:57.590 "driver_specific": {} 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:21:57.590 "name": "Existed_Raid", 00:21:57.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.590 "strip_size_kb": 64, 00:21:57.590 "state": "configuring", 00:21:57.590 "raid_level": "raid5f", 00:21:57.590 "superblock": false, 00:21:57.590 "num_base_bdevs": 3, 00:21:57.590 "num_base_bdevs_discovered": 2, 00:21:57.590 "num_base_bdevs_operational": 3, 00:21:57.590 "base_bdevs_list": [ 00:21:57.590 { 00:21:57.590 "name": "BaseBdev1", 00:21:57.590 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 0, 00:21:57.590 "data_size": 65536 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "name": "BaseBdev2", 00:21:57.590 "uuid": "05dd27cc-4919-44ba-9cef-cfc292effc19", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 0, 00:21:57.590 "data_size": 65536 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "name": "BaseBdev3", 00:21:57.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.590 "is_configured": false, 00:21:57.590 "data_offset": 0, 00:21:57.590 "data_size": 0 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 }' 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.590 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.849 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:57.849 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.849 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.849 [2024-11-06 09:14:56.831491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.849 [2024-11-06 09:14:56.831577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:57.850 [2024-11-06 09:14:56.831597] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:57.850 [2024-11-06 09:14:56.831893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:57.850 [2024-11-06 09:14:56.837612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:57.850 [2024-11-06 09:14:56.837638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:57.850 [2024-11-06 09:14:56.837931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.850 BaseBdev3 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.850 [ 00:21:57.850 { 00:21:57.850 "name": "BaseBdev3", 00:21:57.850 "aliases": [ 00:21:57.850 "aec69eab-8145-428f-9176-bb80f822fd3e" 00:21:57.850 ], 00:21:57.850 "product_name": "Malloc disk", 00:21:57.850 "block_size": 512, 00:21:57.850 "num_blocks": 65536, 00:21:57.850 "uuid": "aec69eab-8145-428f-9176-bb80f822fd3e", 00:21:57.850 "assigned_rate_limits": { 00:21:57.850 "rw_ios_per_sec": 0, 00:21:57.850 "rw_mbytes_per_sec": 0, 00:21:57.850 "r_mbytes_per_sec": 0, 00:21:57.850 "w_mbytes_per_sec": 0 00:21:57.850 }, 00:21:57.850 "claimed": true, 00:21:57.850 "claim_type": "exclusive_write", 00:21:57.850 "zoned": false, 00:21:57.850 "supported_io_types": { 00:21:57.850 "read": true, 00:21:57.850 "write": true, 00:21:57.850 "unmap": true, 00:21:57.850 "flush": true, 00:21:57.850 "reset": true, 00:21:57.850 "nvme_admin": false, 00:21:57.850 "nvme_io": false, 00:21:57.850 "nvme_io_md": false, 00:21:57.850 "write_zeroes": true, 00:21:57.850 "zcopy": true, 00:21:57.850 "get_zone_info": false, 00:21:57.850 "zone_management": false, 00:21:57.850 "zone_append": false, 00:21:57.850 "compare": false, 00:21:57.850 "compare_and_write": false, 00:21:57.850 "abort": true, 00:21:57.850 "seek_hole": false, 00:21:57.850 "seek_data": false, 00:21:57.850 "copy": true, 00:21:57.850 "nvme_iov_md": false 00:21:57.850 }, 00:21:57.850 "memory_domains": [ 00:21:57.850 { 00:21:57.850 "dma_device_id": "system", 00:21:57.850 "dma_device_type": 1 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.850 "dma_device_type": 2 00:21:57.850 } 00:21:57.850 ], 00:21:57.850 "driver_specific": {} 00:21:57.850 } 00:21:57.850 ] 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.850 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.110 09:14:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.110 "name": "Existed_Raid", 00:21:58.110 "uuid": "0710dd45-5068-4e06-8a4b-33df8b174f6e", 00:21:58.110 "strip_size_kb": 64, 00:21:58.110 "state": "online", 00:21:58.110 "raid_level": "raid5f", 00:21:58.110 "superblock": false, 00:21:58.110 "num_base_bdevs": 3, 00:21:58.110 "num_base_bdevs_discovered": 3, 00:21:58.110 "num_base_bdevs_operational": 3, 00:21:58.110 "base_bdevs_list": [ 00:21:58.110 { 00:21:58.110 "name": "BaseBdev1", 00:21:58.110 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:58.110 "is_configured": true, 00:21:58.110 "data_offset": 0, 00:21:58.110 "data_size": 65536 00:21:58.110 }, 00:21:58.110 { 00:21:58.110 "name": "BaseBdev2", 00:21:58.110 "uuid": "05dd27cc-4919-44ba-9cef-cfc292effc19", 00:21:58.110 "is_configured": true, 00:21:58.110 "data_offset": 0, 00:21:58.110 "data_size": 65536 00:21:58.110 }, 00:21:58.110 { 00:21:58.110 "name": "BaseBdev3", 00:21:58.110 "uuid": "aec69eab-8145-428f-9176-bb80f822fd3e", 00:21:58.110 "is_configured": true, 00:21:58.110 "data_offset": 0, 00:21:58.110 "data_size": 65536 00:21:58.110 } 00:21:58.110 ] 00:21:58.110 }' 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.110 09:14:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:58.369 09:14:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.369 [2024-11-06 09:14:57.303760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:58.369 "name": "Existed_Raid", 00:21:58.369 "aliases": [ 00:21:58.369 "0710dd45-5068-4e06-8a4b-33df8b174f6e" 00:21:58.369 ], 00:21:58.369 "product_name": "Raid Volume", 00:21:58.369 "block_size": 512, 00:21:58.369 "num_blocks": 131072, 00:21:58.369 "uuid": "0710dd45-5068-4e06-8a4b-33df8b174f6e", 00:21:58.369 "assigned_rate_limits": { 00:21:58.369 "rw_ios_per_sec": 0, 00:21:58.369 "rw_mbytes_per_sec": 0, 00:21:58.369 "r_mbytes_per_sec": 0, 00:21:58.369 "w_mbytes_per_sec": 0 00:21:58.369 }, 00:21:58.369 "claimed": false, 00:21:58.369 "zoned": false, 00:21:58.369 "supported_io_types": { 00:21:58.369 "read": true, 00:21:58.369 "write": true, 00:21:58.369 "unmap": false, 00:21:58.369 "flush": false, 00:21:58.369 "reset": true, 00:21:58.369 "nvme_admin": false, 00:21:58.369 "nvme_io": false, 00:21:58.369 "nvme_io_md": false, 00:21:58.369 "write_zeroes": true, 00:21:58.369 "zcopy": false, 00:21:58.369 "get_zone_info": false, 00:21:58.369 "zone_management": false, 00:21:58.369 "zone_append": false, 
00:21:58.369 "compare": false, 00:21:58.369 "compare_and_write": false, 00:21:58.369 "abort": false, 00:21:58.369 "seek_hole": false, 00:21:58.369 "seek_data": false, 00:21:58.369 "copy": false, 00:21:58.369 "nvme_iov_md": false 00:21:58.369 }, 00:21:58.369 "driver_specific": { 00:21:58.369 "raid": { 00:21:58.369 "uuid": "0710dd45-5068-4e06-8a4b-33df8b174f6e", 00:21:58.369 "strip_size_kb": 64, 00:21:58.369 "state": "online", 00:21:58.369 "raid_level": "raid5f", 00:21:58.369 "superblock": false, 00:21:58.369 "num_base_bdevs": 3, 00:21:58.369 "num_base_bdevs_discovered": 3, 00:21:58.369 "num_base_bdevs_operational": 3, 00:21:58.369 "base_bdevs_list": [ 00:21:58.369 { 00:21:58.369 "name": "BaseBdev1", 00:21:58.369 "uuid": "f805b929-29a5-416c-935c-5ea2fc1cd96d", 00:21:58.369 "is_configured": true, 00:21:58.369 "data_offset": 0, 00:21:58.369 "data_size": 65536 00:21:58.369 }, 00:21:58.369 { 00:21:58.369 "name": "BaseBdev2", 00:21:58.369 "uuid": "05dd27cc-4919-44ba-9cef-cfc292effc19", 00:21:58.369 "is_configured": true, 00:21:58.369 "data_offset": 0, 00:21:58.369 "data_size": 65536 00:21:58.369 }, 00:21:58.369 { 00:21:58.369 "name": "BaseBdev3", 00:21:58.369 "uuid": "aec69eab-8145-428f-9176-bb80f822fd3e", 00:21:58.369 "is_configured": true, 00:21:58.369 "data_offset": 0, 00:21:58.369 "data_size": 65536 00:21:58.369 } 00:21:58.369 ] 00:21:58.369 } 00:21:58.369 } 00:21:58.369 }' 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:58.369 BaseBdev2 00:21:58.369 BaseBdev3' 00:21:58.369 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.628 [2024-11-06 09:14:57.539259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:58.628 
09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.628 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.963 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.963 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.963 "name": "Existed_Raid", 00:21:58.963 "uuid": "0710dd45-5068-4e06-8a4b-33df8b174f6e", 00:21:58.963 "strip_size_kb": 64, 00:21:58.963 "state": 
"online", 00:21:58.963 "raid_level": "raid5f", 00:21:58.963 "superblock": false, 00:21:58.963 "num_base_bdevs": 3, 00:21:58.963 "num_base_bdevs_discovered": 2, 00:21:58.963 "num_base_bdevs_operational": 2, 00:21:58.963 "base_bdevs_list": [ 00:21:58.963 { 00:21:58.963 "name": null, 00:21:58.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.963 "is_configured": false, 00:21:58.963 "data_offset": 0, 00:21:58.963 "data_size": 65536 00:21:58.963 }, 00:21:58.963 { 00:21:58.963 "name": "BaseBdev2", 00:21:58.963 "uuid": "05dd27cc-4919-44ba-9cef-cfc292effc19", 00:21:58.963 "is_configured": true, 00:21:58.963 "data_offset": 0, 00:21:58.963 "data_size": 65536 00:21:58.963 }, 00:21:58.963 { 00:21:58.963 "name": "BaseBdev3", 00:21:58.963 "uuid": "aec69eab-8145-428f-9176-bb80f822fd3e", 00:21:58.964 "is_configured": true, 00:21:58.964 "data_offset": 0, 00:21:58.964 "data_size": 65536 00:21:58.964 } 00:21:58.964 ] 00:21:58.964 }' 00:21:58.964 09:14:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.964 09:14:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.223 [2024-11-06 09:14:58.116297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.223 [2024-11-06 09:14:58.116404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.223 [2024-11-06 09:14:58.212411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:59.223 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:59.224 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.481 [2024-11-06 09:14:58.268463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:59.481 [2024-11-06 09:14:58.268521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.481 BaseBdev2 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:59.481 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:59.482 [ 00:21:59.482 { 00:21:59.482 "name": "BaseBdev2", 00:21:59.482 "aliases": [ 00:21:59.482 "67ad6331-4f73-493c-b8c9-bdb5587fa0e2" 00:21:59.482 ], 00:21:59.482 "product_name": "Malloc disk", 00:21:59.482 "block_size": 512, 00:21:59.482 "num_blocks": 65536, 00:21:59.482 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:21:59.482 "assigned_rate_limits": { 00:21:59.482 "rw_ios_per_sec": 0, 00:21:59.482 "rw_mbytes_per_sec": 0, 00:21:59.482 "r_mbytes_per_sec": 0, 00:21:59.482 "w_mbytes_per_sec": 0 00:21:59.482 }, 00:21:59.482 "claimed": false, 00:21:59.482 "zoned": false, 00:21:59.482 "supported_io_types": { 00:21:59.482 "read": true, 00:21:59.482 "write": true, 00:21:59.482 "unmap": true, 00:21:59.482 "flush": true, 00:21:59.482 "reset": true, 00:21:59.482 "nvme_admin": false, 00:21:59.482 "nvme_io": false, 00:21:59.482 "nvme_io_md": false, 00:21:59.482 "write_zeroes": true, 00:21:59.482 "zcopy": true, 00:21:59.482 "get_zone_info": false, 00:21:59.482 "zone_management": false, 00:21:59.482 "zone_append": false, 00:21:59.482 "compare": false, 00:21:59.482 "compare_and_write": false, 00:21:59.482 "abort": true, 00:21:59.482 "seek_hole": false, 00:21:59.482 "seek_data": false, 00:21:59.482 "copy": true, 00:21:59.482 "nvme_iov_md": false 00:21:59.482 }, 00:21:59.482 "memory_domains": [ 00:21:59.482 { 00:21:59.482 "dma_device_id": "system", 00:21:59.482 "dma_device_type": 1 00:21:59.482 }, 00:21:59.482 { 00:21:59.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.482 "dma_device_type": 2 00:21:59.482 } 00:21:59.482 ], 00:21:59.482 "driver_specific": {} 00:21:59.482 } 00:21:59.482 ] 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.482 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 BaseBdev3 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:59.741 [ 00:21:59.741 { 00:21:59.741 "name": "BaseBdev3", 00:21:59.741 "aliases": [ 00:21:59.741 "62217b89-0250-432e-b866-93cd486a67e1" 00:21:59.741 ], 00:21:59.741 "product_name": "Malloc disk", 00:21:59.741 "block_size": 512, 00:21:59.741 "num_blocks": 65536, 00:21:59.741 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:21:59.741 "assigned_rate_limits": { 00:21:59.741 "rw_ios_per_sec": 0, 00:21:59.741 "rw_mbytes_per_sec": 0, 00:21:59.741 "r_mbytes_per_sec": 0, 00:21:59.741 "w_mbytes_per_sec": 0 00:21:59.741 }, 00:21:59.741 "claimed": false, 00:21:59.741 "zoned": false, 00:21:59.741 "supported_io_types": { 00:21:59.741 "read": true, 00:21:59.741 "write": true, 00:21:59.741 "unmap": true, 00:21:59.741 "flush": true, 00:21:59.741 "reset": true, 00:21:59.741 "nvme_admin": false, 00:21:59.741 "nvme_io": false, 00:21:59.741 "nvme_io_md": false, 00:21:59.741 "write_zeroes": true, 00:21:59.741 "zcopy": true, 00:21:59.741 "get_zone_info": false, 00:21:59.741 "zone_management": false, 00:21:59.741 "zone_append": false, 00:21:59.741 "compare": false, 00:21:59.741 "compare_and_write": false, 00:21:59.741 "abort": true, 00:21:59.741 "seek_hole": false, 00:21:59.741 "seek_data": false, 00:21:59.741 "copy": true, 00:21:59.741 "nvme_iov_md": false 00:21:59.741 }, 00:21:59.741 "memory_domains": [ 00:21:59.741 { 00:21:59.741 "dma_device_id": "system", 00:21:59.741 "dma_device_type": 1 00:21:59.741 }, 00:21:59.741 { 00:21:59.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.741 "dma_device_type": 2 00:21:59.741 } 00:21:59.741 ], 00:21:59.741 "driver_specific": {} 00:21:59.741 } 00:21:59.741 ] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:59.741 09:14:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 [2024-11-06 09:14:58.587313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.741 [2024-11-06 09:14:58.587362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.741 [2024-11-06 09:14:58.587390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:59.741 [2024-11-06 09:14:58.589492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.741 09:14:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.741 "name": "Existed_Raid", 00:21:59.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.741 "strip_size_kb": 64, 00:21:59.741 "state": "configuring", 00:21:59.741 "raid_level": "raid5f", 00:21:59.741 "superblock": false, 00:21:59.741 "num_base_bdevs": 3, 00:21:59.741 "num_base_bdevs_discovered": 2, 00:21:59.741 "num_base_bdevs_operational": 3, 00:21:59.741 "base_bdevs_list": [ 00:21:59.741 { 00:21:59.741 "name": "BaseBdev1", 00:21:59.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.741 "is_configured": false, 00:21:59.741 "data_offset": 0, 00:21:59.741 "data_size": 0 00:21:59.741 }, 00:21:59.741 { 00:21:59.741 "name": "BaseBdev2", 00:21:59.741 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:21:59.741 "is_configured": true, 00:21:59.741 "data_offset": 0, 00:21:59.741 "data_size": 65536 00:21:59.741 }, 00:21:59.741 { 00:21:59.741 "name": "BaseBdev3", 00:21:59.741 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:21:59.741 "is_configured": true, 
00:21:59.741 "data_offset": 0, 00:21:59.741 "data_size": 65536 00:21:59.741 } 00:21:59.741 ] 00:21:59.741 }' 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.741 09:14:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.002 [2024-11-06 09:14:59.010678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.002 09:14:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.002 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.267 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.267 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.267 "name": "Existed_Raid", 00:22:00.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.267 "strip_size_kb": 64, 00:22:00.267 "state": "configuring", 00:22:00.267 "raid_level": "raid5f", 00:22:00.267 "superblock": false, 00:22:00.267 "num_base_bdevs": 3, 00:22:00.267 "num_base_bdevs_discovered": 1, 00:22:00.267 "num_base_bdevs_operational": 3, 00:22:00.267 "base_bdevs_list": [ 00:22:00.267 { 00:22:00.267 "name": "BaseBdev1", 00:22:00.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.267 "is_configured": false, 00:22:00.267 "data_offset": 0, 00:22:00.267 "data_size": 0 00:22:00.267 }, 00:22:00.267 { 00:22:00.267 "name": null, 00:22:00.267 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:00.267 "is_configured": false, 00:22:00.267 "data_offset": 0, 00:22:00.267 "data_size": 65536 00:22:00.267 }, 00:22:00.267 { 00:22:00.267 "name": "BaseBdev3", 00:22:00.267 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:00.267 "is_configured": true, 00:22:00.267 "data_offset": 0, 00:22:00.267 "data_size": 65536 00:22:00.267 } 00:22:00.267 ] 00:22:00.267 }' 00:22:00.267 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.267 09:14:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 [2024-11-06 09:14:59.471772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.527 BaseBdev1 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:00.527 09:14:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 [ 00:22:00.527 { 00:22:00.527 "name": "BaseBdev1", 00:22:00.527 "aliases": [ 00:22:00.527 "24f80695-a5b2-49a7-9c4c-27151158b54b" 00:22:00.527 ], 00:22:00.527 "product_name": "Malloc disk", 00:22:00.527 "block_size": 512, 00:22:00.527 "num_blocks": 65536, 00:22:00.527 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:00.527 "assigned_rate_limits": { 00:22:00.527 "rw_ios_per_sec": 0, 00:22:00.527 "rw_mbytes_per_sec": 0, 00:22:00.527 "r_mbytes_per_sec": 0, 00:22:00.527 "w_mbytes_per_sec": 0 00:22:00.527 }, 00:22:00.527 "claimed": true, 00:22:00.527 "claim_type": "exclusive_write", 00:22:00.527 "zoned": false, 00:22:00.527 "supported_io_types": { 00:22:00.527 "read": true, 00:22:00.527 "write": true, 00:22:00.527 "unmap": true, 00:22:00.527 "flush": true, 00:22:00.527 "reset": true, 00:22:00.527 "nvme_admin": false, 00:22:00.527 "nvme_io": false, 00:22:00.527 "nvme_io_md": false, 00:22:00.527 "write_zeroes": true, 00:22:00.527 "zcopy": true, 00:22:00.527 "get_zone_info": false, 00:22:00.527 "zone_management": false, 00:22:00.527 "zone_append": false, 00:22:00.527 
"compare": false, 00:22:00.527 "compare_and_write": false, 00:22:00.527 "abort": true, 00:22:00.527 "seek_hole": false, 00:22:00.527 "seek_data": false, 00:22:00.527 "copy": true, 00:22:00.527 "nvme_iov_md": false 00:22:00.527 }, 00:22:00.527 "memory_domains": [ 00:22:00.527 { 00:22:00.527 "dma_device_id": "system", 00:22:00.527 "dma_device_type": 1 00:22:00.527 }, 00:22:00.527 { 00:22:00.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.527 "dma_device_type": 2 00:22:00.527 } 00:22:00.527 ], 00:22:00.527 "driver_specific": {} 00:22:00.527 } 00:22:00.527 ] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.527 09:14:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.527 "name": "Existed_Raid", 00:22:00.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.527 "strip_size_kb": 64, 00:22:00.527 "state": "configuring", 00:22:00.527 "raid_level": "raid5f", 00:22:00.527 "superblock": false, 00:22:00.527 "num_base_bdevs": 3, 00:22:00.527 "num_base_bdevs_discovered": 2, 00:22:00.527 "num_base_bdevs_operational": 3, 00:22:00.527 "base_bdevs_list": [ 00:22:00.527 { 00:22:00.527 "name": "BaseBdev1", 00:22:00.527 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:00.527 "is_configured": true, 00:22:00.527 "data_offset": 0, 00:22:00.527 "data_size": 65536 00:22:00.527 }, 00:22:00.527 { 00:22:00.527 "name": null, 00:22:00.527 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:00.527 "is_configured": false, 00:22:00.527 "data_offset": 0, 00:22:00.527 "data_size": 65536 00:22:00.527 }, 00:22:00.527 { 00:22:00.527 "name": "BaseBdev3", 00:22:00.527 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:00.527 "is_configured": true, 00:22:00.527 "data_offset": 0, 00:22:00.527 "data_size": 65536 00:22:00.527 } 00:22:00.527 ] 00:22:00.527 }' 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.527 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.095 09:14:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.095 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.095 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.095 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.095 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.096 [2024-11-06 09:14:59.959187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.096 09:14:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.096 09:14:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.096 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.096 "name": "Existed_Raid", 00:22:01.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.096 "strip_size_kb": 64, 00:22:01.096 "state": "configuring", 00:22:01.096 "raid_level": "raid5f", 00:22:01.096 "superblock": false, 00:22:01.096 "num_base_bdevs": 3, 00:22:01.096 "num_base_bdevs_discovered": 1, 00:22:01.096 "num_base_bdevs_operational": 3, 00:22:01.096 "base_bdevs_list": [ 00:22:01.096 { 00:22:01.096 "name": "BaseBdev1", 00:22:01.096 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:01.096 "is_configured": true, 00:22:01.096 "data_offset": 0, 00:22:01.096 "data_size": 65536 00:22:01.096 }, 00:22:01.096 { 00:22:01.096 "name": null, 00:22:01.096 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:01.096 "is_configured": false, 00:22:01.096 "data_offset": 0, 00:22:01.096 "data_size": 65536 00:22:01.096 }, 00:22:01.096 { 00:22:01.096 "name": null, 
00:22:01.096 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:01.096 "is_configured": false, 00:22:01.096 "data_offset": 0, 00:22:01.096 "data_size": 65536 00:22:01.096 } 00:22:01.096 ] 00:22:01.096 }' 00:22:01.096 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.096 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.667 [2024-11-06 09:15:00.462503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.667 09:15:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.667 "name": "Existed_Raid", 00:22:01.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.667 "strip_size_kb": 64, 00:22:01.667 "state": "configuring", 00:22:01.667 "raid_level": "raid5f", 00:22:01.667 "superblock": false, 00:22:01.667 "num_base_bdevs": 3, 00:22:01.667 "num_base_bdevs_discovered": 2, 00:22:01.667 "num_base_bdevs_operational": 3, 00:22:01.667 "base_bdevs_list": [ 00:22:01.667 { 
00:22:01.667 "name": "BaseBdev1", 00:22:01.667 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:01.667 "is_configured": true, 00:22:01.667 "data_offset": 0, 00:22:01.667 "data_size": 65536 00:22:01.667 }, 00:22:01.667 { 00:22:01.667 "name": null, 00:22:01.667 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:01.667 "is_configured": false, 00:22:01.667 "data_offset": 0, 00:22:01.667 "data_size": 65536 00:22:01.667 }, 00:22:01.667 { 00:22:01.667 "name": "BaseBdev3", 00:22:01.667 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:01.667 "is_configured": true, 00:22:01.667 "data_offset": 0, 00:22:01.667 "data_size": 65536 00:22:01.667 } 00:22:01.667 ] 00:22:01.667 }' 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.667 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.939 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.939 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:01.939 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.939 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.939 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.222 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:02.222 09:15:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:02.222 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.222 09:15:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.222 [2024-11-06 09:15:00.990229] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.222 "name": "Existed_Raid", 00:22:02.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.222 "strip_size_kb": 64, 00:22:02.222 "state": "configuring", 00:22:02.222 "raid_level": "raid5f", 00:22:02.222 "superblock": false, 00:22:02.222 "num_base_bdevs": 3, 00:22:02.222 "num_base_bdevs_discovered": 1, 00:22:02.222 "num_base_bdevs_operational": 3, 00:22:02.222 "base_bdevs_list": [ 00:22:02.222 { 00:22:02.222 "name": null, 00:22:02.222 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:02.222 "is_configured": false, 00:22:02.222 "data_offset": 0, 00:22:02.222 "data_size": 65536 00:22:02.222 }, 00:22:02.222 { 00:22:02.222 "name": null, 00:22:02.222 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:02.222 "is_configured": false, 00:22:02.222 "data_offset": 0, 00:22:02.222 "data_size": 65536 00:22:02.222 }, 00:22:02.222 { 00:22:02.222 "name": "BaseBdev3", 00:22:02.222 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:02.222 "is_configured": true, 00:22:02.222 "data_offset": 0, 00:22:02.222 "data_size": 65536 00:22:02.222 } 00:22:02.222 ] 00:22:02.222 }' 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.222 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.483 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.483 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.483 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.483 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:02.483 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.744 [2024-11-06 09:15:01.544458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.744 09:15:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.744 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.744 "name": "Existed_Raid", 00:22:02.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.745 "strip_size_kb": 64, 00:22:02.745 "state": "configuring", 00:22:02.745 "raid_level": "raid5f", 00:22:02.745 "superblock": false, 00:22:02.745 "num_base_bdevs": 3, 00:22:02.745 "num_base_bdevs_discovered": 2, 00:22:02.745 "num_base_bdevs_operational": 3, 00:22:02.745 "base_bdevs_list": [ 00:22:02.745 { 00:22:02.745 "name": null, 00:22:02.745 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:02.745 "is_configured": false, 00:22:02.745 "data_offset": 0, 00:22:02.745 "data_size": 65536 00:22:02.745 }, 00:22:02.745 { 00:22:02.745 "name": "BaseBdev2", 00:22:02.745 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:02.745 "is_configured": true, 00:22:02.745 "data_offset": 0, 00:22:02.745 "data_size": 65536 00:22:02.745 }, 00:22:02.745 { 00:22:02.745 "name": "BaseBdev3", 00:22:02.745 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:02.745 "is_configured": true, 00:22:02.745 "data_offset": 0, 00:22:02.745 "data_size": 65536 00:22:02.745 } 00:22:02.745 ] 00:22:02.745 }' 00:22:02.745 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.745 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.007 09:15:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.007 09:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:03.007 09:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:03.007 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24f80695-a5b2-49a7-9c4c-27151158b54b 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.271 [2024-11-06 09:15:02.113990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:03.271 [2024-11-06 09:15:02.114051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:03.271 [2024-11-06 09:15:02.114063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:03.271 [2024-11-06 09:15:02.114372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:22:03.271 [2024-11-06 09:15:02.119700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:03.271 [2024-11-06 09:15:02.119727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:03.271 [2024-11-06 09:15:02.120002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.271 NewBaseBdev 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.271 09:15:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.271 [ 00:22:03.271 { 00:22:03.271 "name": "NewBaseBdev", 00:22:03.271 "aliases": [ 00:22:03.271 "24f80695-a5b2-49a7-9c4c-27151158b54b" 00:22:03.271 ], 00:22:03.271 "product_name": "Malloc disk", 00:22:03.271 "block_size": 512, 00:22:03.271 "num_blocks": 65536, 00:22:03.271 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:03.271 "assigned_rate_limits": { 00:22:03.271 "rw_ios_per_sec": 0, 00:22:03.271 "rw_mbytes_per_sec": 0, 00:22:03.271 "r_mbytes_per_sec": 0, 00:22:03.271 "w_mbytes_per_sec": 0 00:22:03.271 }, 00:22:03.271 "claimed": true, 00:22:03.271 "claim_type": "exclusive_write", 00:22:03.271 "zoned": false, 00:22:03.271 "supported_io_types": { 00:22:03.271 "read": true, 00:22:03.271 "write": true, 00:22:03.271 "unmap": true, 00:22:03.271 "flush": true, 00:22:03.271 "reset": true, 00:22:03.271 "nvme_admin": false, 00:22:03.271 "nvme_io": false, 00:22:03.271 "nvme_io_md": false, 00:22:03.271 "write_zeroes": true, 00:22:03.271 "zcopy": true, 00:22:03.271 "get_zone_info": false, 00:22:03.271 "zone_management": false, 00:22:03.271 "zone_append": false, 00:22:03.271 "compare": false, 00:22:03.271 "compare_and_write": false, 00:22:03.271 "abort": true, 00:22:03.271 "seek_hole": false, 00:22:03.271 "seek_data": false, 00:22:03.271 "copy": true, 00:22:03.271 "nvme_iov_md": false 00:22:03.271 }, 00:22:03.271 "memory_domains": [ 00:22:03.271 { 00:22:03.271 "dma_device_id": "system", 00:22:03.271 "dma_device_type": 1 00:22:03.271 }, 00:22:03.271 { 00:22:03.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.271 "dma_device_type": 2 00:22:03.271 } 00:22:03.271 ], 00:22:03.271 "driver_specific": {} 00:22:03.271 } 00:22:03.271 ] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:03.271 09:15:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.271 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.271 "name": "Existed_Raid", 00:22:03.271 "uuid": "c2d70482-d0d1-4fbd-a7e7-b8726d4c05a8", 00:22:03.271 "strip_size_kb": 64, 00:22:03.271 "state": "online", 
00:22:03.272 "raid_level": "raid5f", 00:22:03.272 "superblock": false, 00:22:03.272 "num_base_bdevs": 3, 00:22:03.272 "num_base_bdevs_discovered": 3, 00:22:03.272 "num_base_bdevs_operational": 3, 00:22:03.272 "base_bdevs_list": [ 00:22:03.272 { 00:22:03.272 "name": "NewBaseBdev", 00:22:03.272 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:03.272 "is_configured": true, 00:22:03.272 "data_offset": 0, 00:22:03.272 "data_size": 65536 00:22:03.272 }, 00:22:03.272 { 00:22:03.272 "name": "BaseBdev2", 00:22:03.272 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:03.272 "is_configured": true, 00:22:03.272 "data_offset": 0, 00:22:03.272 "data_size": 65536 00:22:03.272 }, 00:22:03.272 { 00:22:03.272 "name": "BaseBdev3", 00:22:03.272 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:03.272 "is_configured": true, 00:22:03.272 "data_offset": 0, 00:22:03.272 "data_size": 65536 00:22:03.272 } 00:22:03.272 ] 00:22:03.272 }' 00:22:03.272 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.272 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.855 [2024-11-06 09:15:02.654466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.855 "name": "Existed_Raid", 00:22:03.855 "aliases": [ 00:22:03.855 "c2d70482-d0d1-4fbd-a7e7-b8726d4c05a8" 00:22:03.855 ], 00:22:03.855 "product_name": "Raid Volume", 00:22:03.855 "block_size": 512, 00:22:03.855 "num_blocks": 131072, 00:22:03.855 "uuid": "c2d70482-d0d1-4fbd-a7e7-b8726d4c05a8", 00:22:03.855 "assigned_rate_limits": { 00:22:03.855 "rw_ios_per_sec": 0, 00:22:03.855 "rw_mbytes_per_sec": 0, 00:22:03.855 "r_mbytes_per_sec": 0, 00:22:03.855 "w_mbytes_per_sec": 0 00:22:03.855 }, 00:22:03.855 "claimed": false, 00:22:03.855 "zoned": false, 00:22:03.855 "supported_io_types": { 00:22:03.855 "read": true, 00:22:03.855 "write": true, 00:22:03.855 "unmap": false, 00:22:03.855 "flush": false, 00:22:03.855 "reset": true, 00:22:03.855 "nvme_admin": false, 00:22:03.855 "nvme_io": false, 00:22:03.855 "nvme_io_md": false, 00:22:03.855 "write_zeroes": true, 00:22:03.855 "zcopy": false, 00:22:03.855 "get_zone_info": false, 00:22:03.855 "zone_management": false, 00:22:03.855 "zone_append": false, 00:22:03.855 "compare": false, 00:22:03.855 "compare_and_write": false, 00:22:03.855 "abort": false, 00:22:03.855 "seek_hole": false, 00:22:03.855 "seek_data": false, 00:22:03.855 "copy": false, 00:22:03.855 "nvme_iov_md": false 00:22:03.855 }, 00:22:03.855 "driver_specific": { 00:22:03.855 "raid": { 00:22:03.855 "uuid": "c2d70482-d0d1-4fbd-a7e7-b8726d4c05a8", 
00:22:03.855 "strip_size_kb": 64, 00:22:03.855 "state": "online", 00:22:03.855 "raid_level": "raid5f", 00:22:03.855 "superblock": false, 00:22:03.855 "num_base_bdevs": 3, 00:22:03.855 "num_base_bdevs_discovered": 3, 00:22:03.855 "num_base_bdevs_operational": 3, 00:22:03.855 "base_bdevs_list": [ 00:22:03.855 { 00:22:03.855 "name": "NewBaseBdev", 00:22:03.855 "uuid": "24f80695-a5b2-49a7-9c4c-27151158b54b", 00:22:03.855 "is_configured": true, 00:22:03.855 "data_offset": 0, 00:22:03.855 "data_size": 65536 00:22:03.855 }, 00:22:03.855 { 00:22:03.855 "name": "BaseBdev2", 00:22:03.855 "uuid": "67ad6331-4f73-493c-b8c9-bdb5587fa0e2", 00:22:03.855 "is_configured": true, 00:22:03.855 "data_offset": 0, 00:22:03.855 "data_size": 65536 00:22:03.855 }, 00:22:03.855 { 00:22:03.855 "name": "BaseBdev3", 00:22:03.855 "uuid": "62217b89-0250-432e-b866-93cd486a67e1", 00:22:03.855 "is_configured": true, 00:22:03.855 "data_offset": 0, 00:22:03.855 "data_size": 65536 00:22:03.855 } 00:22:03.855 ] 00:22:03.855 } 00:22:03.855 } 00:22:03.855 }' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:03.855 BaseBdev2 00:22:03.855 BaseBdev3' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.855 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.125 [2024-11-06 09:15:02.946271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:04.125 [2024-11-06 09:15:02.946319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.125 [2024-11-06 09:15:02.946425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.125 [2024-11-06 09:15:02.946712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.125 [2024-11-06 09:15:02.946738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79605 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 79605 ']' 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 
79605 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79605 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:04.125 killing process with pid 79605 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79605' 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 79605 00:22:04.125 [2024-11-06 09:15:03.000703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:04.125 09:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 79605 00:22:04.387 [2024-11-06 09:15:03.308585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:05.801 00:22:05.801 real 0m10.410s 00:22:05.801 user 0m16.467s 00:22:05.801 sys 0m2.196s 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.801 ************************************ 00:22:05.801 END TEST raid5f_state_function_test 00:22:05.801 ************************************ 00:22:05.801 09:15:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:05.801 09:15:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 
00:22:05.801 09:15:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:05.801 09:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.801 ************************************ 00:22:05.801 START TEST raid5f_state_function_test_sb 00:22:05.801 ************************************ 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80222 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80222' 00:22:05.801 Process raid pid: 80222 00:22:05.801 09:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80222 00:22:05.802 09:15:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80222 ']' 00:22:05.802 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.802 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.802 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.802 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.802 09:15:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.802 [2024-11-06 09:15:04.616433] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:22:05.802 [2024-11-06 09:15:04.616565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.802 [2024-11-06 09:15:04.800260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.060 [2024-11-06 09:15:04.927018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.320 [2024-11-06 09:15:05.142265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.320 [2024-11-06 09:15:05.142312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.578 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:06.578 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:06.578 09:15:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:06.578 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.578 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.578 [2024-11-06 09:15:05.454185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:06.578 [2024-11-06 09:15:05.454241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:06.579 [2024-11-06 09:15:05.454253] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.579 [2024-11-06 09:15:05.454267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.579 [2024-11-06 09:15:05.454290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.579 [2024-11-06 09:15:05.454303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.579 "name": "Existed_Raid", 00:22:06.579 "uuid": "d82ad406-43de-45d8-a17b-1553e7821dfa", 00:22:06.579 "strip_size_kb": 64, 00:22:06.579 "state": "configuring", 00:22:06.579 "raid_level": "raid5f", 00:22:06.579 "superblock": true, 00:22:06.579 "num_base_bdevs": 3, 00:22:06.579 "num_base_bdevs_discovered": 0, 00:22:06.579 "num_base_bdevs_operational": 3, 00:22:06.579 "base_bdevs_list": [ 00:22:06.579 { 00:22:06.579 "name": "BaseBdev1", 00:22:06.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.579 "is_configured": false, 00:22:06.579 "data_offset": 0, 00:22:06.579 "data_size": 0 00:22:06.579 }, 00:22:06.579 { 00:22:06.579 "name": "BaseBdev2", 00:22:06.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.579 "is_configured": false, 00:22:06.579 
"data_offset": 0, 00:22:06.579 "data_size": 0 00:22:06.579 }, 00:22:06.579 { 00:22:06.579 "name": "BaseBdev3", 00:22:06.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.579 "is_configured": false, 00:22:06.579 "data_offset": 0, 00:22:06.579 "data_size": 0 00:22:06.579 } 00:22:06.579 ] 00:22:06.579 }' 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.579 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.838 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:06.838 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.838 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.838 [2024-11-06 09:15:05.873515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.838 [2024-11-06 09:15:05.873554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.098 [2024-11-06 09:15:05.885502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.098 [2024-11-06 09:15:05.885551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.098 [2024-11-06 09:15:05.885562] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.098 [2024-11-06 09:15:05.885575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.098 [2024-11-06 09:15:05.885582] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:07.098 [2024-11-06 09:15:05.885594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.098 [2024-11-06 09:15:05.935464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.098 BaseBdev1 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.098 [ 00:22:07.098 { 00:22:07.098 "name": "BaseBdev1", 00:22:07.098 "aliases": [ 00:22:07.098 "40264524-71c9-4545-a96e-f1f507e33557" 00:22:07.098 ], 00:22:07.098 "product_name": "Malloc disk", 00:22:07.098 "block_size": 512, 00:22:07.098 "num_blocks": 65536, 00:22:07.098 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 00:22:07.098 "assigned_rate_limits": { 00:22:07.098 "rw_ios_per_sec": 0, 00:22:07.098 "rw_mbytes_per_sec": 0, 00:22:07.098 "r_mbytes_per_sec": 0, 00:22:07.098 "w_mbytes_per_sec": 0 00:22:07.098 }, 00:22:07.098 "claimed": true, 00:22:07.098 "claim_type": "exclusive_write", 00:22:07.098 "zoned": false, 00:22:07.098 "supported_io_types": { 00:22:07.098 "read": true, 00:22:07.098 "write": true, 00:22:07.098 "unmap": true, 00:22:07.098 "flush": true, 00:22:07.098 "reset": true, 00:22:07.098 "nvme_admin": false, 00:22:07.098 "nvme_io": false, 00:22:07.098 "nvme_io_md": false, 00:22:07.098 "write_zeroes": true, 00:22:07.098 "zcopy": true, 00:22:07.098 "get_zone_info": false, 00:22:07.098 "zone_management": false, 00:22:07.098 "zone_append": false, 00:22:07.098 "compare": false, 00:22:07.098 "compare_and_write": false, 00:22:07.098 "abort": true, 00:22:07.098 "seek_hole": false, 00:22:07.098 
"seek_data": false, 00:22:07.098 "copy": true, 00:22:07.098 "nvme_iov_md": false 00:22:07.098 }, 00:22:07.098 "memory_domains": [ 00:22:07.098 { 00:22:07.098 "dma_device_id": "system", 00:22:07.098 "dma_device_type": 1 00:22:07.098 }, 00:22:07.098 { 00:22:07.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.098 "dma_device_type": 2 00:22:07.098 } 00:22:07.098 ], 00:22:07.098 "driver_specific": {} 00:22:07.098 } 00:22:07.098 ] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.098 09:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.098 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.098 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.098 "name": "Existed_Raid", 00:22:07.098 "uuid": "dc44170e-6152-425f-9fff-c4746b406680", 00:22:07.098 "strip_size_kb": 64, 00:22:07.098 "state": "configuring", 00:22:07.098 "raid_level": "raid5f", 00:22:07.098 "superblock": true, 00:22:07.098 "num_base_bdevs": 3, 00:22:07.098 "num_base_bdevs_discovered": 1, 00:22:07.098 "num_base_bdevs_operational": 3, 00:22:07.098 "base_bdevs_list": [ 00:22:07.098 { 00:22:07.098 "name": "BaseBdev1", 00:22:07.098 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 00:22:07.098 "is_configured": true, 00:22:07.098 "data_offset": 2048, 00:22:07.098 "data_size": 63488 00:22:07.098 }, 00:22:07.098 { 00:22:07.098 "name": "BaseBdev2", 00:22:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.098 "is_configured": false, 00:22:07.098 "data_offset": 0, 00:22:07.098 "data_size": 0 00:22:07.098 }, 00:22:07.098 { 00:22:07.098 "name": "BaseBdev3", 00:22:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.098 "is_configured": false, 00:22:07.098 "data_offset": 0, 00:22:07.099 "data_size": 0 00:22:07.099 } 00:22:07.099 ] 00:22:07.099 }' 00:22:07.099 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.099 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.666 [2024-11-06 09:15:06.407363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:07.666 [2024-11-06 09:15:06.407419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.666 [2024-11-06 09:15:06.419414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.666 [2024-11-06 09:15:06.421523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.666 [2024-11-06 09:15:06.421692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.666 [2024-11-06 09:15:06.421716] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:07.666 [2024-11-06 09:15:06.421730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.666 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.667 "name": 
"Existed_Raid", 00:22:07.667 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:07.667 "strip_size_kb": 64, 00:22:07.667 "state": "configuring", 00:22:07.667 "raid_level": "raid5f", 00:22:07.667 "superblock": true, 00:22:07.667 "num_base_bdevs": 3, 00:22:07.667 "num_base_bdevs_discovered": 1, 00:22:07.667 "num_base_bdevs_operational": 3, 00:22:07.667 "base_bdevs_list": [ 00:22:07.667 { 00:22:07.667 "name": "BaseBdev1", 00:22:07.667 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 00:22:07.667 "is_configured": true, 00:22:07.667 "data_offset": 2048, 00:22:07.667 "data_size": 63488 00:22:07.667 }, 00:22:07.667 { 00:22:07.667 "name": "BaseBdev2", 00:22:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.667 "is_configured": false, 00:22:07.667 "data_offset": 0, 00:22:07.667 "data_size": 0 00:22:07.667 }, 00:22:07.667 { 00:22:07.667 "name": "BaseBdev3", 00:22:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.667 "is_configured": false, 00:22:07.667 "data_offset": 0, 00:22:07.667 "data_size": 0 00:22:07.667 } 00:22:07.667 ] 00:22:07.667 }' 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.667 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.927 [2024-11-06 09:15:06.859914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:07.927 BaseBdev2 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.927 [ 00:22:07.927 { 00:22:07.927 "name": "BaseBdev2", 00:22:07.927 "aliases": [ 00:22:07.927 "b966a3ff-fe8b-499d-9e0a-933a63d984e5" 00:22:07.927 ], 00:22:07.927 "product_name": "Malloc disk", 00:22:07.927 "block_size": 512, 00:22:07.927 "num_blocks": 65536, 00:22:07.927 "uuid": "b966a3ff-fe8b-499d-9e0a-933a63d984e5", 00:22:07.927 "assigned_rate_limits": { 00:22:07.927 "rw_ios_per_sec": 0, 00:22:07.927 "rw_mbytes_per_sec": 0, 00:22:07.927 "r_mbytes_per_sec": 0, 00:22:07.927 "w_mbytes_per_sec": 0 00:22:07.927 }, 00:22:07.927 "claimed": true, 
00:22:07.927 "claim_type": "exclusive_write", 00:22:07.927 "zoned": false, 00:22:07.927 "supported_io_types": { 00:22:07.927 "read": true, 00:22:07.927 "write": true, 00:22:07.927 "unmap": true, 00:22:07.927 "flush": true, 00:22:07.927 "reset": true, 00:22:07.927 "nvme_admin": false, 00:22:07.927 "nvme_io": false, 00:22:07.927 "nvme_io_md": false, 00:22:07.927 "write_zeroes": true, 00:22:07.927 "zcopy": true, 00:22:07.927 "get_zone_info": false, 00:22:07.927 "zone_management": false, 00:22:07.927 "zone_append": false, 00:22:07.927 "compare": false, 00:22:07.927 "compare_and_write": false, 00:22:07.927 "abort": true, 00:22:07.927 "seek_hole": false, 00:22:07.927 "seek_data": false, 00:22:07.927 "copy": true, 00:22:07.927 "nvme_iov_md": false 00:22:07.927 }, 00:22:07.927 "memory_domains": [ 00:22:07.927 { 00:22:07.927 "dma_device_id": "system", 00:22:07.927 "dma_device_type": 1 00:22:07.927 }, 00:22:07.927 { 00:22:07.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.927 "dma_device_type": 2 00:22:07.927 } 00:22:07.927 ], 00:22:07.927 "driver_specific": {} 00:22:07.927 } 00:22:07.927 ] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.927 09:15:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.927 "name": "Existed_Raid", 00:22:07.927 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:07.927 "strip_size_kb": 64, 00:22:07.927 "state": "configuring", 00:22:07.927 "raid_level": "raid5f", 00:22:07.927 "superblock": true, 00:22:07.927 "num_base_bdevs": 3, 00:22:07.927 "num_base_bdevs_discovered": 2, 00:22:07.927 "num_base_bdevs_operational": 3, 00:22:07.927 "base_bdevs_list": [ 00:22:07.927 { 00:22:07.927 "name": "BaseBdev1", 00:22:07.927 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 
00:22:07.927 "is_configured": true, 00:22:07.927 "data_offset": 2048, 00:22:07.927 "data_size": 63488 00:22:07.927 }, 00:22:07.927 { 00:22:07.927 "name": "BaseBdev2", 00:22:07.927 "uuid": "b966a3ff-fe8b-499d-9e0a-933a63d984e5", 00:22:07.927 "is_configured": true, 00:22:07.927 "data_offset": 2048, 00:22:07.927 "data_size": 63488 00:22:07.927 }, 00:22:07.927 { 00:22:07.927 "name": "BaseBdev3", 00:22:07.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.927 "is_configured": false, 00:22:07.927 "data_offset": 0, 00:22:07.927 "data_size": 0 00:22:07.927 } 00:22:07.927 ] 00:22:07.927 }' 00:22:07.927 09:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.928 09:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.496 [2024-11-06 09:15:07.336719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.496 [2024-11-06 09:15:07.336992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:08.496 [2024-11-06 09:15:07.337019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:08.496 [2024-11-06 09:15:07.337319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:08.496 BaseBdev3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.496 [2024-11-06 09:15:07.343071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:08.496 [2024-11-06 09:15:07.343209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:08.496 [2024-11-06 09:15:07.343429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.496 [ 00:22:08.496 { 00:22:08.496 "name": "BaseBdev3", 00:22:08.496 "aliases": [ 00:22:08.496 "71441e13-5fe0-45a3-bb04-b6bf0a341d54" 00:22:08.496 ], 00:22:08.496 "product_name": "Malloc disk", 00:22:08.496 "block_size": 512, 00:22:08.496 
"num_blocks": 65536, 00:22:08.496 "uuid": "71441e13-5fe0-45a3-bb04-b6bf0a341d54", 00:22:08.496 "assigned_rate_limits": { 00:22:08.496 "rw_ios_per_sec": 0, 00:22:08.496 "rw_mbytes_per_sec": 0, 00:22:08.496 "r_mbytes_per_sec": 0, 00:22:08.496 "w_mbytes_per_sec": 0 00:22:08.496 }, 00:22:08.496 "claimed": true, 00:22:08.496 "claim_type": "exclusive_write", 00:22:08.496 "zoned": false, 00:22:08.496 "supported_io_types": { 00:22:08.496 "read": true, 00:22:08.496 "write": true, 00:22:08.496 "unmap": true, 00:22:08.496 "flush": true, 00:22:08.496 "reset": true, 00:22:08.496 "nvme_admin": false, 00:22:08.496 "nvme_io": false, 00:22:08.496 "nvme_io_md": false, 00:22:08.496 "write_zeroes": true, 00:22:08.496 "zcopy": true, 00:22:08.496 "get_zone_info": false, 00:22:08.496 "zone_management": false, 00:22:08.496 "zone_append": false, 00:22:08.496 "compare": false, 00:22:08.496 "compare_and_write": false, 00:22:08.496 "abort": true, 00:22:08.496 "seek_hole": false, 00:22:08.496 "seek_data": false, 00:22:08.496 "copy": true, 00:22:08.496 "nvme_iov_md": false 00:22:08.496 }, 00:22:08.496 "memory_domains": [ 00:22:08.496 { 00:22:08.496 "dma_device_id": "system", 00:22:08.496 "dma_device_type": 1 00:22:08.496 }, 00:22:08.496 { 00:22:08.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.496 "dma_device_type": 2 00:22:08.496 } 00:22:08.496 ], 00:22:08.496 "driver_specific": {} 00:22:08.496 } 00:22:08.496 ] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.496 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.496 "name": "Existed_Raid", 00:22:08.496 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:08.496 "strip_size_kb": 64, 00:22:08.496 "state": "online", 00:22:08.496 "raid_level": "raid5f", 00:22:08.496 "superblock": true, 
00:22:08.496 "num_base_bdevs": 3, 00:22:08.496 "num_base_bdevs_discovered": 3, 00:22:08.496 "num_base_bdevs_operational": 3, 00:22:08.496 "base_bdevs_list": [ 00:22:08.496 { 00:22:08.496 "name": "BaseBdev1", 00:22:08.496 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 00:22:08.496 "is_configured": true, 00:22:08.497 "data_offset": 2048, 00:22:08.497 "data_size": 63488 00:22:08.497 }, 00:22:08.497 { 00:22:08.497 "name": "BaseBdev2", 00:22:08.497 "uuid": "b966a3ff-fe8b-499d-9e0a-933a63d984e5", 00:22:08.497 "is_configured": true, 00:22:08.497 "data_offset": 2048, 00:22:08.497 "data_size": 63488 00:22:08.497 }, 00:22:08.497 { 00:22:08.497 "name": "BaseBdev3", 00:22:08.497 "uuid": "71441e13-5fe0-45a3-bb04-b6bf0a341d54", 00:22:08.497 "is_configured": true, 00:22:08.497 "data_offset": 2048, 00:22:08.497 "data_size": 63488 00:22:08.497 } 00:22:08.497 ] 00:22:08.497 }' 00:22:08.497 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.497 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.081 [2024-11-06 09:15:07.825611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.081 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.081 "name": "Existed_Raid", 00:22:09.081 "aliases": [ 00:22:09.081 "36cbf9fa-ad7f-4a27-9549-00420899a134" 00:22:09.081 ], 00:22:09.081 "product_name": "Raid Volume", 00:22:09.081 "block_size": 512, 00:22:09.081 "num_blocks": 126976, 00:22:09.081 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:09.081 "assigned_rate_limits": { 00:22:09.081 "rw_ios_per_sec": 0, 00:22:09.081 "rw_mbytes_per_sec": 0, 00:22:09.081 "r_mbytes_per_sec": 0, 00:22:09.081 "w_mbytes_per_sec": 0 00:22:09.081 }, 00:22:09.081 "claimed": false, 00:22:09.081 "zoned": false, 00:22:09.081 "supported_io_types": { 00:22:09.081 "read": true, 00:22:09.081 "write": true, 00:22:09.081 "unmap": false, 00:22:09.081 "flush": false, 00:22:09.081 "reset": true, 00:22:09.081 "nvme_admin": false, 00:22:09.081 "nvme_io": false, 00:22:09.081 "nvme_io_md": false, 00:22:09.081 "write_zeroes": true, 00:22:09.081 "zcopy": false, 00:22:09.081 "get_zone_info": false, 00:22:09.081 "zone_management": false, 00:22:09.081 "zone_append": false, 00:22:09.081 "compare": false, 00:22:09.081 "compare_and_write": false, 00:22:09.081 "abort": false, 00:22:09.081 "seek_hole": false, 00:22:09.081 "seek_data": false, 00:22:09.081 "copy": false, 00:22:09.081 "nvme_iov_md": false 00:22:09.081 }, 00:22:09.082 "driver_specific": { 00:22:09.082 "raid": { 00:22:09.082 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:09.082 
"strip_size_kb": 64, 00:22:09.082 "state": "online", 00:22:09.082 "raid_level": "raid5f", 00:22:09.082 "superblock": true, 00:22:09.082 "num_base_bdevs": 3, 00:22:09.082 "num_base_bdevs_discovered": 3, 00:22:09.082 "num_base_bdevs_operational": 3, 00:22:09.082 "base_bdevs_list": [ 00:22:09.082 { 00:22:09.082 "name": "BaseBdev1", 00:22:09.082 "uuid": "40264524-71c9-4545-a96e-f1f507e33557", 00:22:09.082 "is_configured": true, 00:22:09.082 "data_offset": 2048, 00:22:09.082 "data_size": 63488 00:22:09.082 }, 00:22:09.082 { 00:22:09.082 "name": "BaseBdev2", 00:22:09.082 "uuid": "b966a3ff-fe8b-499d-9e0a-933a63d984e5", 00:22:09.082 "is_configured": true, 00:22:09.082 "data_offset": 2048, 00:22:09.082 "data_size": 63488 00:22:09.082 }, 00:22:09.082 { 00:22:09.082 "name": "BaseBdev3", 00:22:09.082 "uuid": "71441e13-5fe0-45a3-bb04-b6bf0a341d54", 00:22:09.082 "is_configured": true, 00:22:09.082 "data_offset": 2048, 00:22:09.082 "data_size": 63488 00:22:09.082 } 00:22:09.082 ] 00:22:09.082 } 00:22:09.082 } 00:22:09.082 }' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:09.082 BaseBdev2 00:22:09.082 BaseBdev3' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.082 09:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.082 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.082 [2024-11-06 09:15:08.077445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.342 "name": "Existed_Raid", 00:22:09.342 "uuid": "36cbf9fa-ad7f-4a27-9549-00420899a134", 00:22:09.342 "strip_size_kb": 64, 00:22:09.342 "state": "online", 00:22:09.342 "raid_level": "raid5f", 00:22:09.342 "superblock": true, 00:22:09.342 "num_base_bdevs": 3, 00:22:09.342 "num_base_bdevs_discovered": 2, 00:22:09.342 "num_base_bdevs_operational": 2, 
00:22:09.342 "base_bdevs_list": [ 00:22:09.342 { 00:22:09.342 "name": null, 00:22:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.342 "is_configured": false, 00:22:09.342 "data_offset": 0, 00:22:09.342 "data_size": 63488 00:22:09.342 }, 00:22:09.342 { 00:22:09.342 "name": "BaseBdev2", 00:22:09.342 "uuid": "b966a3ff-fe8b-499d-9e0a-933a63d984e5", 00:22:09.342 "is_configured": true, 00:22:09.342 "data_offset": 2048, 00:22:09.342 "data_size": 63488 00:22:09.342 }, 00:22:09.342 { 00:22:09.342 "name": "BaseBdev3", 00:22:09.342 "uuid": "71441e13-5fe0-45a3-bb04-b6bf0a341d54", 00:22:09.342 "is_configured": true, 00:22:09.342 "data_offset": 2048, 00:22:09.342 "data_size": 63488 00:22:09.342 } 00:22:09.342 ] 00:22:09.342 }' 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.342 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.601 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.601 [2024-11-06 09:15:08.617497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:09.601 [2024-11-06 09:15:08.617635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.860 [2024-11-06 09:15:08.714217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:09.860 
09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.860 [2024-11-06 09:15:08.770188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:09.860 [2024-11-06 09:15:08.770241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.860 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:09.861 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.120 BaseBdev2 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.120 09:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.120 [ 00:22:10.120 { 
00:22:10.120 "name": "BaseBdev2", 00:22:10.120 "aliases": [ 00:22:10.120 "5b46d6db-c219-4dff-b768-e90bfc40a8bc" 00:22:10.120 ], 00:22:10.120 "product_name": "Malloc disk", 00:22:10.120 "block_size": 512, 00:22:10.120 "num_blocks": 65536, 00:22:10.120 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:10.120 "assigned_rate_limits": { 00:22:10.120 "rw_ios_per_sec": 0, 00:22:10.120 "rw_mbytes_per_sec": 0, 00:22:10.120 "r_mbytes_per_sec": 0, 00:22:10.120 "w_mbytes_per_sec": 0 00:22:10.120 }, 00:22:10.120 "claimed": false, 00:22:10.120 "zoned": false, 00:22:10.120 "supported_io_types": { 00:22:10.120 "read": true, 00:22:10.120 "write": true, 00:22:10.120 "unmap": true, 00:22:10.120 "flush": true, 00:22:10.120 "reset": true, 00:22:10.120 "nvme_admin": false, 00:22:10.120 "nvme_io": false, 00:22:10.120 "nvme_io_md": false, 00:22:10.120 "write_zeroes": true, 00:22:10.120 "zcopy": true, 00:22:10.120 "get_zone_info": false, 00:22:10.120 "zone_management": false, 00:22:10.120 "zone_append": false, 00:22:10.120 "compare": false, 00:22:10.120 "compare_and_write": false, 00:22:10.120 "abort": true, 00:22:10.120 "seek_hole": false, 00:22:10.120 "seek_data": false, 00:22:10.120 "copy": true, 00:22:10.120 "nvme_iov_md": false 00:22:10.120 }, 00:22:10.120 "memory_domains": [ 00:22:10.120 { 00:22:10.120 "dma_device_id": "system", 00:22:10.120 "dma_device_type": 1 00:22:10.120 }, 00:22:10.120 { 00:22:10.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.120 "dma_device_type": 2 00:22:10.120 } 00:22:10.120 ], 00:22:10.120 "driver_specific": {} 00:22:10.120 } 00:22:10.120 ] 00:22:10.120 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.120 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:10.120 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 BaseBdev3 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.121 09:15:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 [ 00:22:10.121 { 00:22:10.121 "name": "BaseBdev3", 00:22:10.121 "aliases": [ 00:22:10.121 "27a4ed52-e26b-455b-aa90-a6f21b939273" 00:22:10.121 ], 00:22:10.121 "product_name": "Malloc disk", 00:22:10.121 "block_size": 512, 00:22:10.121 "num_blocks": 65536, 00:22:10.121 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:10.121 "assigned_rate_limits": { 00:22:10.121 "rw_ios_per_sec": 0, 00:22:10.121 "rw_mbytes_per_sec": 0, 00:22:10.121 "r_mbytes_per_sec": 0, 00:22:10.121 "w_mbytes_per_sec": 0 00:22:10.121 }, 00:22:10.121 "claimed": false, 00:22:10.121 "zoned": false, 00:22:10.121 "supported_io_types": { 00:22:10.121 "read": true, 00:22:10.121 "write": true, 00:22:10.121 "unmap": true, 00:22:10.121 "flush": true, 00:22:10.121 "reset": true, 00:22:10.121 "nvme_admin": false, 00:22:10.121 "nvme_io": false, 00:22:10.121 "nvme_io_md": false, 00:22:10.121 "write_zeroes": true, 00:22:10.121 "zcopy": true, 00:22:10.121 "get_zone_info": false, 00:22:10.121 "zone_management": false, 00:22:10.121 "zone_append": false, 00:22:10.121 "compare": false, 00:22:10.121 "compare_and_write": false, 00:22:10.121 "abort": true, 00:22:10.121 "seek_hole": false, 00:22:10.121 "seek_data": false, 00:22:10.121 "copy": true, 00:22:10.121 "nvme_iov_md": false 00:22:10.121 }, 00:22:10.121 "memory_domains": [ 00:22:10.121 { 00:22:10.121 "dma_device_id": "system", 00:22:10.121 "dma_device_type": 1 00:22:10.121 }, 00:22:10.121 { 00:22:10.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.121 "dma_device_type": 2 00:22:10.121 } 00:22:10.121 ], 00:22:10.121 "driver_specific": {} 00:22:10.121 } 00:22:10.121 ] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 [2024-11-06 09:15:09.103229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:10.121 [2024-11-06 09:15:09.103298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:10.121 [2024-11-06 09:15:09.103326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.121 [2024-11-06 09:15:09.105441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.121 09:15:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.379 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.379 "name": "Existed_Raid", 00:22:10.379 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:10.379 "strip_size_kb": 64, 00:22:10.379 "state": "configuring", 00:22:10.379 "raid_level": "raid5f", 00:22:10.380 "superblock": true, 00:22:10.380 "num_base_bdevs": 3, 00:22:10.380 "num_base_bdevs_discovered": 2, 00:22:10.380 "num_base_bdevs_operational": 3, 00:22:10.380 "base_bdevs_list": [ 00:22:10.380 { 00:22:10.380 "name": "BaseBdev1", 00:22:10.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.380 "is_configured": false, 00:22:10.380 "data_offset": 0, 00:22:10.380 "data_size": 0 00:22:10.380 }, 00:22:10.380 { 00:22:10.380 "name": "BaseBdev2", 00:22:10.380 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:10.380 "is_configured": true, 00:22:10.380 "data_offset": 2048, 00:22:10.380 "data_size": 63488 00:22:10.380 }, 00:22:10.380 { 
00:22:10.380 "name": "BaseBdev3", 00:22:10.380 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:10.380 "is_configured": true, 00:22:10.380 "data_offset": 2048, 00:22:10.380 "data_size": 63488 00:22:10.380 } 00:22:10.380 ] 00:22:10.380 }' 00:22:10.380 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.380 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.639 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.640 [2024-11-06 09:15:09.478647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.640 "name": "Existed_Raid", 00:22:10.640 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:10.640 "strip_size_kb": 64, 00:22:10.640 "state": "configuring", 00:22:10.640 "raid_level": "raid5f", 00:22:10.640 "superblock": true, 00:22:10.640 "num_base_bdevs": 3, 00:22:10.640 "num_base_bdevs_discovered": 1, 00:22:10.640 "num_base_bdevs_operational": 3, 00:22:10.640 "base_bdevs_list": [ 00:22:10.640 { 00:22:10.640 "name": "BaseBdev1", 00:22:10.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.640 "is_configured": false, 00:22:10.640 "data_offset": 0, 00:22:10.640 "data_size": 0 00:22:10.640 }, 00:22:10.640 { 00:22:10.640 "name": null, 00:22:10.640 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:10.640 "is_configured": false, 00:22:10.640 "data_offset": 0, 00:22:10.640 "data_size": 63488 00:22:10.640 }, 00:22:10.640 { 00:22:10.640 "name": "BaseBdev3", 00:22:10.640 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:10.640 "is_configured": true, 00:22:10.640 "data_offset": 2048, 00:22:10.640 "data_size": 
63488 00:22:10.640 } 00:22:10.640 ] 00:22:10.640 }' 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.640 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.899 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.899 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.899 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.899 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:10.899 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.158 [2024-11-06 09:15:09.999473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.158 BaseBdev1 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.158 09:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.158 09:15:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.158 [ 00:22:11.158 { 00:22:11.158 "name": "BaseBdev1", 00:22:11.158 "aliases": [ 00:22:11.158 "53aded11-ef22-4986-8332-c1ba64082538" 00:22:11.158 ], 00:22:11.158 "product_name": "Malloc disk", 00:22:11.158 "block_size": 512, 00:22:11.158 "num_blocks": 65536, 00:22:11.158 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:11.158 "assigned_rate_limits": { 00:22:11.158 "rw_ios_per_sec": 0, 00:22:11.158 "rw_mbytes_per_sec": 0, 00:22:11.158 "r_mbytes_per_sec": 0, 00:22:11.158 "w_mbytes_per_sec": 0 00:22:11.158 }, 00:22:11.158 "claimed": true, 00:22:11.158 "claim_type": "exclusive_write", 00:22:11.158 "zoned": false, 00:22:11.158 "supported_io_types": { 00:22:11.158 "read": true, 00:22:11.158 "write": true, 00:22:11.158 "unmap": true, 00:22:11.158 "flush": true, 00:22:11.158 "reset": true, 00:22:11.158 "nvme_admin": false, 00:22:11.158 
"nvme_io": false, 00:22:11.158 "nvme_io_md": false, 00:22:11.158 "write_zeroes": true, 00:22:11.158 "zcopy": true, 00:22:11.158 "get_zone_info": false, 00:22:11.158 "zone_management": false, 00:22:11.158 "zone_append": false, 00:22:11.158 "compare": false, 00:22:11.158 "compare_and_write": false, 00:22:11.158 "abort": true, 00:22:11.158 "seek_hole": false, 00:22:11.158 "seek_data": false, 00:22:11.158 "copy": true, 00:22:11.158 "nvme_iov_md": false 00:22:11.158 }, 00:22:11.158 "memory_domains": [ 00:22:11.158 { 00:22:11.158 "dma_device_id": "system", 00:22:11.158 "dma_device_type": 1 00:22:11.158 }, 00:22:11.158 { 00:22:11.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.158 "dma_device_type": 2 00:22:11.158 } 00:22:11.158 ], 00:22:11.158 "driver_specific": {} 00:22:11.158 } 00:22:11.158 ] 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.158 "name": "Existed_Raid", 00:22:11.158 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:11.158 "strip_size_kb": 64, 00:22:11.158 "state": "configuring", 00:22:11.158 "raid_level": "raid5f", 00:22:11.158 "superblock": true, 00:22:11.158 "num_base_bdevs": 3, 00:22:11.158 "num_base_bdevs_discovered": 2, 00:22:11.158 "num_base_bdevs_operational": 3, 00:22:11.158 "base_bdevs_list": [ 00:22:11.158 { 00:22:11.158 "name": "BaseBdev1", 00:22:11.158 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:11.158 "is_configured": true, 00:22:11.158 "data_offset": 2048, 00:22:11.158 "data_size": 63488 00:22:11.158 }, 00:22:11.158 { 00:22:11.158 "name": null, 00:22:11.158 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:11.158 "is_configured": false, 00:22:11.158 "data_offset": 0, 00:22:11.158 "data_size": 63488 00:22:11.158 }, 00:22:11.158 { 00:22:11.158 "name": "BaseBdev3", 00:22:11.158 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:11.158 "is_configured": true, 00:22:11.158 "data_offset": 2048, 00:22:11.158 "data_size": 
63488 00:22:11.158 } 00:22:11.158 ] 00:22:11.158 }' 00:22:11.158 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.159 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.417 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.417 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.417 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:11.417 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.677 [2024-11-06 09:15:10.499028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.677 09:15:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.677 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.677 "name": "Existed_Raid", 00:22:11.677 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:11.677 "strip_size_kb": 64, 00:22:11.677 "state": "configuring", 00:22:11.677 "raid_level": "raid5f", 00:22:11.677 "superblock": true, 00:22:11.677 "num_base_bdevs": 3, 00:22:11.677 "num_base_bdevs_discovered": 1, 00:22:11.677 "num_base_bdevs_operational": 3, 00:22:11.677 "base_bdevs_list": [ 00:22:11.677 { 00:22:11.677 "name": "BaseBdev1", 00:22:11.677 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 
00:22:11.677 "is_configured": true, 00:22:11.677 "data_offset": 2048, 00:22:11.677 "data_size": 63488 00:22:11.677 }, 00:22:11.677 { 00:22:11.677 "name": null, 00:22:11.677 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:11.677 "is_configured": false, 00:22:11.677 "data_offset": 0, 00:22:11.677 "data_size": 63488 00:22:11.677 }, 00:22:11.677 { 00:22:11.677 "name": null, 00:22:11.678 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:11.678 "is_configured": false, 00:22:11.678 "data_offset": 0, 00:22:11.678 "data_size": 63488 00:22:11.678 } 00:22:11.678 ] 00:22:11.678 }' 00:22:11.678 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.678 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.939 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.939 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:11.939 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.939 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.202 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.202 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:12.202 09:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:12.202 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.202 09:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.202 [2024-11-06 09:15:11.006407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.202 "name": "Existed_Raid", 00:22:12.202 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:12.202 "strip_size_kb": 64, 00:22:12.202 "state": "configuring", 00:22:12.202 "raid_level": "raid5f", 00:22:12.202 "superblock": true, 00:22:12.202 "num_base_bdevs": 3, 00:22:12.202 "num_base_bdevs_discovered": 2, 00:22:12.202 "num_base_bdevs_operational": 3, 00:22:12.202 "base_bdevs_list": [ 00:22:12.202 { 00:22:12.202 "name": "BaseBdev1", 00:22:12.202 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:12.202 "is_configured": true, 00:22:12.202 "data_offset": 2048, 00:22:12.202 "data_size": 63488 00:22:12.202 }, 00:22:12.202 { 00:22:12.202 "name": null, 00:22:12.202 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:12.202 "is_configured": false, 00:22:12.202 "data_offset": 0, 00:22:12.202 "data_size": 63488 00:22:12.202 }, 00:22:12.202 { 00:22:12.202 "name": "BaseBdev3", 00:22:12.202 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:12.202 "is_configured": true, 00:22:12.202 "data_offset": 2048, 00:22:12.202 "data_size": 63488 00:22:12.202 } 00:22:12.202 ] 00:22:12.202 }' 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.202 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.462 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.462 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:12.462 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.462 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.462 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.721 09:15:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.721 [2024-11-06 09:15:11.526229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.721 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.722 "name": "Existed_Raid", 00:22:12.722 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:12.722 "strip_size_kb": 64, 00:22:12.722 "state": "configuring", 00:22:12.722 "raid_level": "raid5f", 00:22:12.722 "superblock": true, 00:22:12.722 "num_base_bdevs": 3, 00:22:12.722 "num_base_bdevs_discovered": 1, 00:22:12.722 "num_base_bdevs_operational": 3, 00:22:12.722 "base_bdevs_list": [ 00:22:12.722 { 00:22:12.722 "name": null, 00:22:12.722 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:12.722 "is_configured": false, 00:22:12.722 "data_offset": 0, 00:22:12.722 "data_size": 63488 00:22:12.722 }, 00:22:12.722 { 00:22:12.722 "name": null, 00:22:12.722 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:12.722 "is_configured": false, 00:22:12.722 "data_offset": 0, 00:22:12.722 "data_size": 63488 00:22:12.722 }, 00:22:12.722 { 00:22:12.722 "name": "BaseBdev3", 00:22:12.722 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:12.722 "is_configured": true, 00:22:12.722 "data_offset": 2048, 00:22:12.722 "data_size": 63488 00:22:12.722 } 00:22:12.722 ] 00:22:12.722 }' 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.722 09:15:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.290 [2024-11-06 09:15:12.094678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.290 09:15:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.290 "name": "Existed_Raid", 00:22:13.290 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:13.290 "strip_size_kb": 64, 00:22:13.290 "state": "configuring", 00:22:13.290 "raid_level": "raid5f", 00:22:13.290 "superblock": true, 00:22:13.290 "num_base_bdevs": 3, 00:22:13.290 "num_base_bdevs_discovered": 2, 00:22:13.290 "num_base_bdevs_operational": 3, 00:22:13.290 "base_bdevs_list": [ 00:22:13.290 { 00:22:13.290 "name": null, 00:22:13.290 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:13.290 "is_configured": false, 00:22:13.290 "data_offset": 0, 00:22:13.290 "data_size": 63488 00:22:13.290 }, 00:22:13.290 { 00:22:13.290 "name": "BaseBdev2", 00:22:13.290 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:13.290 "is_configured": true, 00:22:13.290 "data_offset": 2048, 00:22:13.290 "data_size": 63488 00:22:13.290 }, 00:22:13.290 { 
00:22:13.290 "name": "BaseBdev3", 00:22:13.290 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:13.290 "is_configured": true, 00:22:13.290 "data_offset": 2048, 00:22:13.290 "data_size": 63488 00:22:13.290 } 00:22:13.290 ] 00:22:13.290 }' 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.290 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.550 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 53aded11-ef22-4986-8332-c1ba64082538 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.809 [2024-11-06 09:15:12.631987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.809 [2024-11-06 09:15:12.632403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:13.809 [2024-11-06 09:15:12.632430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:13.809 [2024-11-06 09:15:12.632690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:13.809 NewBaseBdev 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.809 [2024-11-06 09:15:12.638190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:13.809 
[2024-11-06 09:15:12.638212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:13.809 [2024-11-06 09:15:12.638505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.809 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.809 [ 00:22:13.809 { 00:22:13.809 "name": "NewBaseBdev", 00:22:13.809 "aliases": [ 00:22:13.809 "53aded11-ef22-4986-8332-c1ba64082538" 00:22:13.809 ], 00:22:13.809 "product_name": "Malloc disk", 00:22:13.809 "block_size": 512, 00:22:13.809 "num_blocks": 65536, 00:22:13.809 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:13.809 "assigned_rate_limits": { 00:22:13.809 "rw_ios_per_sec": 0, 00:22:13.809 "rw_mbytes_per_sec": 0, 00:22:13.809 "r_mbytes_per_sec": 0, 00:22:13.809 "w_mbytes_per_sec": 0 00:22:13.809 }, 00:22:13.809 "claimed": true, 00:22:13.809 "claim_type": "exclusive_write", 00:22:13.809 "zoned": false, 00:22:13.809 "supported_io_types": { 00:22:13.809 "read": true, 00:22:13.809 "write": true, 00:22:13.809 "unmap": true, 00:22:13.809 "flush": true, 00:22:13.809 "reset": true, 00:22:13.809 "nvme_admin": false, 00:22:13.809 "nvme_io": false, 00:22:13.809 "nvme_io_md": false, 00:22:13.809 "write_zeroes": true, 00:22:13.809 "zcopy": true, 00:22:13.809 "get_zone_info": false, 00:22:13.809 "zone_management": false, 00:22:13.809 "zone_append": false, 00:22:13.809 "compare": false, 00:22:13.809 "compare_and_write": false, 00:22:13.809 "abort": true, 00:22:13.810 "seek_hole": false, 00:22:13.810 "seek_data": false, 
00:22:13.810 "copy": true, 00:22:13.810 "nvme_iov_md": false 00:22:13.810 }, 00:22:13.810 "memory_domains": [ 00:22:13.810 { 00:22:13.810 "dma_device_id": "system", 00:22:13.810 "dma_device_type": 1 00:22:13.810 }, 00:22:13.810 { 00:22:13.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.810 "dma_device_type": 2 00:22:13.810 } 00:22:13.810 ], 00:22:13.810 "driver_specific": {} 00:22:13.810 } 00:22:13.810 ] 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.810 09:15:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.810 "name": "Existed_Raid", 00:22:13.810 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:13.810 "strip_size_kb": 64, 00:22:13.810 "state": "online", 00:22:13.810 "raid_level": "raid5f", 00:22:13.810 "superblock": true, 00:22:13.810 "num_base_bdevs": 3, 00:22:13.810 "num_base_bdevs_discovered": 3, 00:22:13.810 "num_base_bdevs_operational": 3, 00:22:13.810 "base_bdevs_list": [ 00:22:13.810 { 00:22:13.810 "name": "NewBaseBdev", 00:22:13.810 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:13.810 "is_configured": true, 00:22:13.810 "data_offset": 2048, 00:22:13.810 "data_size": 63488 00:22:13.810 }, 00:22:13.810 { 00:22:13.810 "name": "BaseBdev2", 00:22:13.810 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:13.810 "is_configured": true, 00:22:13.810 "data_offset": 2048, 00:22:13.810 "data_size": 63488 00:22:13.810 }, 00:22:13.810 { 00:22:13.810 "name": "BaseBdev3", 00:22:13.810 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:13.810 "is_configured": true, 00:22:13.810 "data_offset": 2048, 00:22:13.810 "data_size": 63488 00:22:13.810 } 00:22:13.810 ] 00:22:13.810 }' 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.810 09:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.069 [2024-11-06 09:15:13.048599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.069 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.069 "name": "Existed_Raid", 00:22:14.069 "aliases": [ 00:22:14.069 "20523226-0c93-4b51-b3af-21aa020e260f" 00:22:14.069 ], 00:22:14.069 "product_name": "Raid Volume", 00:22:14.069 "block_size": 512, 00:22:14.069 "num_blocks": 126976, 00:22:14.069 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:14.069 "assigned_rate_limits": { 00:22:14.069 "rw_ios_per_sec": 0, 00:22:14.069 "rw_mbytes_per_sec": 0, 00:22:14.069 "r_mbytes_per_sec": 0, 00:22:14.069 "w_mbytes_per_sec": 0 00:22:14.069 }, 00:22:14.069 "claimed": false, 00:22:14.069 "zoned": false, 00:22:14.069 
"supported_io_types": { 00:22:14.069 "read": true, 00:22:14.069 "write": true, 00:22:14.069 "unmap": false, 00:22:14.069 "flush": false, 00:22:14.069 "reset": true, 00:22:14.069 "nvme_admin": false, 00:22:14.069 "nvme_io": false, 00:22:14.069 "nvme_io_md": false, 00:22:14.069 "write_zeroes": true, 00:22:14.069 "zcopy": false, 00:22:14.069 "get_zone_info": false, 00:22:14.069 "zone_management": false, 00:22:14.069 "zone_append": false, 00:22:14.069 "compare": false, 00:22:14.069 "compare_and_write": false, 00:22:14.069 "abort": false, 00:22:14.069 "seek_hole": false, 00:22:14.069 "seek_data": false, 00:22:14.069 "copy": false, 00:22:14.069 "nvme_iov_md": false 00:22:14.069 }, 00:22:14.069 "driver_specific": { 00:22:14.069 "raid": { 00:22:14.069 "uuid": "20523226-0c93-4b51-b3af-21aa020e260f", 00:22:14.069 "strip_size_kb": 64, 00:22:14.069 "state": "online", 00:22:14.069 "raid_level": "raid5f", 00:22:14.069 "superblock": true, 00:22:14.069 "num_base_bdevs": 3, 00:22:14.069 "num_base_bdevs_discovered": 3, 00:22:14.069 "num_base_bdevs_operational": 3, 00:22:14.069 "base_bdevs_list": [ 00:22:14.069 { 00:22:14.069 "name": "NewBaseBdev", 00:22:14.069 "uuid": "53aded11-ef22-4986-8332-c1ba64082538", 00:22:14.069 "is_configured": true, 00:22:14.069 "data_offset": 2048, 00:22:14.069 "data_size": 63488 00:22:14.069 }, 00:22:14.069 { 00:22:14.069 "name": "BaseBdev2", 00:22:14.069 "uuid": "5b46d6db-c219-4dff-b768-e90bfc40a8bc", 00:22:14.069 "is_configured": true, 00:22:14.069 "data_offset": 2048, 00:22:14.069 "data_size": 63488 00:22:14.069 }, 00:22:14.069 { 00:22:14.069 "name": "BaseBdev3", 00:22:14.070 "uuid": "27a4ed52-e26b-455b-aa90-a6f21b939273", 00:22:14.070 "is_configured": true, 00:22:14.070 "data_offset": 2048, 00:22:14.070 "data_size": 63488 00:22:14.070 } 00:22:14.070 ] 00:22:14.070 } 00:22:14.070 } 00:22:14.070 }' 00:22:14.070 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:14.329 BaseBdev2 00:22:14.329 BaseBdev3' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.329 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.330 [2024-11-06 09:15:13.308417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.330 [2024-11-06 09:15:13.308447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:22:14.330 [2024-11-06 09:15:13.308528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.330 [2024-11-06 09:15:13.308806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.330 [2024-11-06 09:15:13.308822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80222 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80222 ']' 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80222 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80222 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:14.330 killing process with pid 80222 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80222' 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80222 00:22:14.330 [2024-11-06 09:15:13.359548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.330 09:15:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 80222 00:22:14.897 [2024-11-06 09:15:13.664321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.834 09:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:15.834 00:22:15.834 real 0m10.274s 00:22:15.834 user 0m16.234s 00:22:15.834 sys 0m2.098s 00:22:15.834 09:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:15.834 09:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.834 ************************************ 00:22:15.834 END TEST raid5f_state_function_test_sb 00:22:15.834 ************************************ 00:22:15.834 09:15:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:15.834 09:15:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:15.834 09:15:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:15.834 09:15:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.834 ************************************ 00:22:15.834 START TEST raid5f_superblock_test 00:22:15.834 ************************************ 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
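The trace above shows the test tearing down the SPDK app via a `killprocess` helper (a `kill -0` liveness probe, a `ps`-based comm check, then the actual kill). As a rough sketch of that pattern — the helper name and exact checks are assumptions read off the `autotest_common.sh` line tags, and the comm/sudo check is omitted here:

```shell
# Hedged sketch of the killprocess() teardown pattern seen in the trace:
# verify the pid is non-empty and alive, then kill and reap it.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                 # refuse an empty pid
  kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap; ignore the signal exit code
}

sleep 30 &
bg_pid=$!
killprocess "$bg_pid"
```

The `kill -0` probe sends no signal; it only checks that the pid exists and is signalable, which is why the real helper uses it before attempting the kill.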
00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80837 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80837 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 80837 ']' 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
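The `raid_superblock_test` setup traced here builds its strip-size argument conditionally: the `'[' raid5f '!=' raid1 ']'` step sets `strip_size=64` and `strip_size_create_arg='-z 64'` because raid1 mirrors take no strip size. A minimal standalone sketch of that branch (variable names taken from the trace; the surrounding rpc plumbing is omitted):

```shell
# Minimal sketch of the strip-size setup from bdev_raid.sh@404-406:
# raid1 gets no -z argument, striped levels (raid0, raid5f, ...) get -z 64.
raid_level=raid5f
strip_size=""
strip_size_create_arg=""
if [ "$raid_level" != "raid1" ]; then
  strip_size=64
  strip_size_create_arg="-z $strip_size"
fi
echo "create arg: $strip_size_create_arg"
```

The resulting `-z 64` is what later appears verbatim in the `rpc_cmd bdev_raid_create -z 64 -r raid5f ...` call in the trace.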
00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.834 09:15:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.094 [2024-11-06 09:15:14.961413] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:22:16.094 [2024-11-06 09:15:14.961546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80837 ] 00:22:16.353 [2024-11-06 09:15:15.136251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.353 [2024-11-06 09:15:15.252719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.611 [2024-11-06 09:15:15.465152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.611 [2024-11-06 09:15:15.465189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.870 malloc1 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.870 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.870 [2024-11-06 09:15:15.904680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.870 [2024-11-06 09:15:15.904889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.870 [2024-11-06 09:15:15.904953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:16.870 [2024-11-06 09:15:15.905048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.870 [2024-11-06 09:15:15.907561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.870 [2024-11-06 09:15:15.907708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.870 pt1 00:22:17.130 
09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 malloc2 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 [2024-11-06 09:15:15.964569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.130 [2024-11-06 
09:15:15.964626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.130 [2024-11-06 09:15:15.964651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:17.130 [2024-11-06 09:15:15.964662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.130 [2024-11-06 09:15:15.967024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.130 [2024-11-06 09:15:15.967064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.130 pt2 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 malloc3 00:22:17.130 09:15:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 [2024-11-06 09:15:16.032520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.130 [2024-11-06 09:15:16.032678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.130 [2024-11-06 09:15:16.032737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:17.130 [2024-11-06 09:15:16.032836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.130 [2024-11-06 09:15:16.035232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.130 [2024-11-06 09:15:16.035384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.130 pt3 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 [2024-11-06 09:15:16.044571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:22:17.130 [2024-11-06 09:15:16.046728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.130 [2024-11-06 09:15:16.046790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.130 [2024-11-06 09:15:16.046963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:17.130 [2024-11-06 09:15:16.046983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.130 [2024-11-06 09:15:16.047238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:17.130 [2024-11-06 09:15:16.052616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:17.130 [2024-11-06 09:15:16.052636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:17.130 [2024-11-06 09:15:16.052839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.130 "name": "raid_bdev1", 00:22:17.130 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:17.130 "strip_size_kb": 64, 00:22:17.130 "state": "online", 00:22:17.130 "raid_level": "raid5f", 00:22:17.130 "superblock": true, 00:22:17.130 "num_base_bdevs": 3, 00:22:17.130 "num_base_bdevs_discovered": 3, 00:22:17.130 "num_base_bdevs_operational": 3, 00:22:17.130 "base_bdevs_list": [ 00:22:17.130 { 00:22:17.130 "name": "pt1", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 }, 00:22:17.130 { 00:22:17.130 "name": "pt2", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 }, 00:22:17.130 { 00:22:17.130 "name": "pt3", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 } 00:22:17.130 ] 
00:22:17.130 }' 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.130 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.718 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 [2024-11-06 09:15:16.478871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.719 "name": "raid_bdev1", 00:22:17.719 "aliases": [ 00:22:17.719 "4cafa66c-f99d-44a8-815f-45739552e6da" 00:22:17.719 ], 00:22:17.719 "product_name": "Raid Volume", 00:22:17.719 "block_size": 512, 00:22:17.719 "num_blocks": 126976, 00:22:17.719 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:17.719 "assigned_rate_limits": { 00:22:17.719 
"rw_ios_per_sec": 0, 00:22:17.719 "rw_mbytes_per_sec": 0, 00:22:17.719 "r_mbytes_per_sec": 0, 00:22:17.719 "w_mbytes_per_sec": 0 00:22:17.719 }, 00:22:17.719 "claimed": false, 00:22:17.719 "zoned": false, 00:22:17.719 "supported_io_types": { 00:22:17.719 "read": true, 00:22:17.719 "write": true, 00:22:17.719 "unmap": false, 00:22:17.719 "flush": false, 00:22:17.719 "reset": true, 00:22:17.719 "nvme_admin": false, 00:22:17.719 "nvme_io": false, 00:22:17.719 "nvme_io_md": false, 00:22:17.719 "write_zeroes": true, 00:22:17.719 "zcopy": false, 00:22:17.719 "get_zone_info": false, 00:22:17.719 "zone_management": false, 00:22:17.719 "zone_append": false, 00:22:17.719 "compare": false, 00:22:17.719 "compare_and_write": false, 00:22:17.719 "abort": false, 00:22:17.719 "seek_hole": false, 00:22:17.719 "seek_data": false, 00:22:17.719 "copy": false, 00:22:17.719 "nvme_iov_md": false 00:22:17.719 }, 00:22:17.719 "driver_specific": { 00:22:17.719 "raid": { 00:22:17.719 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:17.719 "strip_size_kb": 64, 00:22:17.719 "state": "online", 00:22:17.719 "raid_level": "raid5f", 00:22:17.719 "superblock": true, 00:22:17.719 "num_base_bdevs": 3, 00:22:17.719 "num_base_bdevs_discovered": 3, 00:22:17.719 "num_base_bdevs_operational": 3, 00:22:17.719 "base_bdevs_list": [ 00:22:17.719 { 00:22:17.719 "name": "pt1", 00:22:17.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.719 "is_configured": true, 00:22:17.719 "data_offset": 2048, 00:22:17.719 "data_size": 63488 00:22:17.719 }, 00:22:17.719 { 00:22:17.719 "name": "pt2", 00:22:17.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.719 "is_configured": true, 00:22:17.719 "data_offset": 2048, 00:22:17.719 "data_size": 63488 00:22:17.719 }, 00:22:17.719 { 00:22:17.719 "name": "pt3", 00:22:17.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.719 "is_configured": true, 00:22:17.719 "data_offset": 2048, 00:22:17.719 "data_size": 63488 00:22:17.719 } 00:22:17.719 ] 
00:22:17.719 } 00:22:17.719 } 00:22:17.719 }' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:17.719 pt2 00:22:17.719 pt3' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 09:15:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.719 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:17.719 [2024-11-06 09:15:16.746558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4cafa66c-f99d-44a8-815f-45739552e6da 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4cafa66c-f99d-44a8-815f-45739552e6da ']' 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 [2024-11-06 09:15:16.794378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.979 [2024-11-06 09:15:16.794406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.979 [2024-11-06 09:15:16.794475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.979 [2024-11-06 09:15:16.794550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.979 [2024-11-06 09:15:16.794561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:17.979 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 [2024-11-06 09:15:16.938343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:17.980 [2024-11-06 
09:15:16.940499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:17.980 [2024-11-06 09:15:16.940549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:17.980 [2024-11-06 09:15:16.940607] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:17.980 [2024-11-06 09:15:16.940665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:17.980 [2024-11-06 09:15:16.940687] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:17.980 [2024-11-06 09:15:16.940708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.980 [2024-11-06 09:15:16.940719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:17.980 request: 00:22:17.980 { 00:22:17.980 "name": "raid_bdev1", 00:22:17.980 "raid_level": "raid5f", 00:22:17.980 "base_bdevs": [ 00:22:17.980 "malloc1", 00:22:17.980 "malloc2", 00:22:17.980 "malloc3" 00:22:17.980 ], 00:22:17.980 "strip_size_kb": 64, 00:22:17.980 "superblock": false, 00:22:17.980 "method": "bdev_raid_create", 00:22:17.980 "req_id": 1 00:22:17.980 } 00:22:17.980 Got JSON-RPC error response 00:22:17.980 response: 00:22:17.980 { 00:22:17.980 "code": -17, 00:22:17.980 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:17.980 } 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
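The error-path check above wraps `rpc_cmd bdev_raid_create ...` in a `NOT` helper: the RPC is expected to fail with "File exists", and the test step passes only because the exit status is inverted (`es=1` in the trace). A sketch of that idiom — the helper name comes from the `autotest_common.sh` tags in the trace, but this body is a simplified assumption, not the real implementation:

```shell
# Sketch of the NOT() expected-failure idiom from the trace: run the
# command and flip its exit status, so a step succeeds only when the
# wrapped command fails as the test intends.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, which is what we wanted
}

NOT false && echo "expected failure detected"
```

This is why the log records `es=1` followed by a passing step: the non-zero status from the rejected `bdev_raid_create` is the success condition.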
00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.980 09:15:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 [2024-11-06 09:15:16.998198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.980 [2024-11-06 09:15:16.998383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.980 [2024-11-06 09:15:16.998444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:17.980 [2024-11-06 09:15:16.998553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.980 [2024-11-06 09:15:17.001046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.980 [2024-11-06 09:15:17.001181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.980 [2024-11-06 09:15:17.001366] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:17.980 [2024-11-06 09:15:17.001515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.980 pt1 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.980 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.239 09:15:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.239 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.239 "name": "raid_bdev1", 00:22:18.239 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:18.239 "strip_size_kb": 64, 00:22:18.239 "state": "configuring", 00:22:18.239 "raid_level": "raid5f", 00:22:18.239 "superblock": true, 00:22:18.239 "num_base_bdevs": 3, 00:22:18.239 "num_base_bdevs_discovered": 1, 00:22:18.239 "num_base_bdevs_operational": 3, 00:22:18.239 "base_bdevs_list": [ 00:22:18.239 { 00:22:18.239 "name": "pt1", 00:22:18.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.239 "is_configured": true, 00:22:18.239 "data_offset": 2048, 00:22:18.239 "data_size": 63488 00:22:18.239 }, 00:22:18.239 { 00:22:18.239 "name": null, 00:22:18.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.239 "is_configured": false, 00:22:18.239 "data_offset": 2048, 00:22:18.239 "data_size": 63488 00:22:18.239 }, 00:22:18.239 { 00:22:18.239 "name": null, 00:22:18.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.239 "is_configured": false, 00:22:18.239 "data_offset": 2048, 00:22:18.239 "data_size": 63488 00:22:18.239 } 00:22:18.239 ] 00:22:18.239 }' 00:22:18.239 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.239 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.499 [2024-11-06 09:15:17.417597] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.499 [2024-11-06 09:15:17.417663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.499 [2024-11-06 09:15:17.417687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:18.499 [2024-11-06 09:15:17.417699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.499 [2024-11-06 09:15:17.418171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.499 [2024-11-06 09:15:17.418198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.499 [2024-11-06 09:15:17.418308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.499 [2024-11-06 09:15:17.418333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.499 pt2 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.499 [2024-11-06 09:15:17.425583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.499 "name": "raid_bdev1", 00:22:18.499 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:18.499 "strip_size_kb": 64, 00:22:18.499 "state": "configuring", 00:22:18.499 "raid_level": "raid5f", 00:22:18.499 "superblock": true, 00:22:18.499 "num_base_bdevs": 3, 00:22:18.499 "num_base_bdevs_discovered": 1, 00:22:18.499 "num_base_bdevs_operational": 3, 00:22:18.499 "base_bdevs_list": [ 00:22:18.499 { 00:22:18.499 "name": "pt1", 00:22:18.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.499 "is_configured": true, 00:22:18.499 "data_offset": 2048, 00:22:18.499 "data_size": 63488 00:22:18.499 }, 00:22:18.499 { 
00:22:18.499 "name": null, 00:22:18.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.499 "is_configured": false, 00:22:18.499 "data_offset": 0, 00:22:18.499 "data_size": 63488 00:22:18.499 }, 00:22:18.499 { 00:22:18.499 "name": null, 00:22:18.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.499 "is_configured": false, 00:22:18.499 "data_offset": 2048, 00:22:18.499 "data_size": 63488 00:22:18.499 } 00:22:18.499 ] 00:22:18.499 }' 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.499 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.067 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:19.067 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.067 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.067 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.067 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.067 [2024-11-06 09:15:17.825398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.068 [2024-11-06 09:15:17.825471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.068 [2024-11-06 09:15:17.825492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:19.068 [2024-11-06 09:15:17.825506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.068 [2024-11-06 09:15:17.825976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.068 [2024-11-06 09:15:17.825999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.068 [2024-11-06 
09:15:17.826084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.068 [2024-11-06 09:15:17.826110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.068 pt2 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.068 [2024-11-06 09:15:17.833367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:19.068 [2024-11-06 09:15:17.833419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.068 [2024-11-06 09:15:17.833436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:19.068 [2024-11-06 09:15:17.833448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.068 [2024-11-06 09:15:17.833834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.068 [2024-11-06 09:15:17.833863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:19.068 [2024-11-06 09:15:17.833930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:19.068 [2024-11-06 09:15:17.833952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:19.068 [2024-11-06 09:15:17.834073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:19.068 [2024-11-06 09:15:17.834086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:19.068 [2024-11-06 09:15:17.834351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:19.068 [2024-11-06 09:15:17.839812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:19.068 [2024-11-06 09:15:17.839833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:19.068 [2024-11-06 09:15:17.840026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.068 pt3 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.068 "name": "raid_bdev1", 00:22:19.068 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:19.068 "strip_size_kb": 64, 00:22:19.068 "state": "online", 00:22:19.068 "raid_level": "raid5f", 00:22:19.068 "superblock": true, 00:22:19.068 "num_base_bdevs": 3, 00:22:19.068 "num_base_bdevs_discovered": 3, 00:22:19.068 "num_base_bdevs_operational": 3, 00:22:19.068 "base_bdevs_list": [ 00:22:19.068 { 00:22:19.068 "name": "pt1", 00:22:19.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.068 "is_configured": true, 00:22:19.068 "data_offset": 2048, 00:22:19.068 "data_size": 63488 00:22:19.068 }, 00:22:19.068 { 00:22:19.068 "name": "pt2", 00:22:19.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.068 "is_configured": true, 00:22:19.068 "data_offset": 2048, 00:22:19.068 "data_size": 63488 00:22:19.068 }, 00:22:19.068 { 00:22:19.068 "name": "pt3", 00:22:19.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.068 "is_configured": true, 00:22:19.068 "data_offset": 2048, 00:22:19.068 "data_size": 63488 00:22:19.068 } 00:22:19.068 ] 00:22:19.068 }' 00:22:19.068 09:15:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.068 09:15:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.326 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.326 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:19.326 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:19.326 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.326 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.327 [2024-11-06 09:15:18.225749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.327 "name": "raid_bdev1", 00:22:19.327 "aliases": [ 00:22:19.327 "4cafa66c-f99d-44a8-815f-45739552e6da" 00:22:19.327 ], 00:22:19.327 "product_name": "Raid Volume", 00:22:19.327 "block_size": 512, 00:22:19.327 "num_blocks": 126976, 00:22:19.327 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:19.327 "assigned_rate_limits": { 00:22:19.327 "rw_ios_per_sec": 0, 00:22:19.327 "rw_mbytes_per_sec": 0, 00:22:19.327 "r_mbytes_per_sec": 0, 00:22:19.327 "w_mbytes_per_sec": 0 00:22:19.327 }, 
00:22:19.327 "claimed": false, 00:22:19.327 "zoned": false, 00:22:19.327 "supported_io_types": { 00:22:19.327 "read": true, 00:22:19.327 "write": true, 00:22:19.327 "unmap": false, 00:22:19.327 "flush": false, 00:22:19.327 "reset": true, 00:22:19.327 "nvme_admin": false, 00:22:19.327 "nvme_io": false, 00:22:19.327 "nvme_io_md": false, 00:22:19.327 "write_zeroes": true, 00:22:19.327 "zcopy": false, 00:22:19.327 "get_zone_info": false, 00:22:19.327 "zone_management": false, 00:22:19.327 "zone_append": false, 00:22:19.327 "compare": false, 00:22:19.327 "compare_and_write": false, 00:22:19.327 "abort": false, 00:22:19.327 "seek_hole": false, 00:22:19.327 "seek_data": false, 00:22:19.327 "copy": false, 00:22:19.327 "nvme_iov_md": false 00:22:19.327 }, 00:22:19.327 "driver_specific": { 00:22:19.327 "raid": { 00:22:19.327 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:19.327 "strip_size_kb": 64, 00:22:19.327 "state": "online", 00:22:19.327 "raid_level": "raid5f", 00:22:19.327 "superblock": true, 00:22:19.327 "num_base_bdevs": 3, 00:22:19.327 "num_base_bdevs_discovered": 3, 00:22:19.327 "num_base_bdevs_operational": 3, 00:22:19.327 "base_bdevs_list": [ 00:22:19.327 { 00:22:19.327 "name": "pt1", 00:22:19.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.327 "is_configured": true, 00:22:19.327 "data_offset": 2048, 00:22:19.327 "data_size": 63488 00:22:19.327 }, 00:22:19.327 { 00:22:19.327 "name": "pt2", 00:22:19.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.327 "is_configured": true, 00:22:19.327 "data_offset": 2048, 00:22:19.327 "data_size": 63488 00:22:19.327 }, 00:22:19.327 { 00:22:19.327 "name": "pt3", 00:22:19.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.327 "is_configured": true, 00:22:19.327 "data_offset": 2048, 00:22:19.327 "data_size": 63488 00:22:19.327 } 00:22:19.327 ] 00:22:19.327 } 00:22:19.327 } 00:22:19.327 }' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:19.327 pt2 00:22:19.327 pt3' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.327 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.586 [2024-11-06 09:15:18.505600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
4cafa66c-f99d-44a8-815f-45739552e6da '!=' 4cafa66c-f99d-44a8-815f-45739552e6da ']' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.586 [2024-11-06 09:15:18.545471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.586 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.586 "name": "raid_bdev1", 00:22:19.586 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:19.586 "strip_size_kb": 64, 00:22:19.586 "state": "online", 00:22:19.586 "raid_level": "raid5f", 00:22:19.586 "superblock": true, 00:22:19.586 "num_base_bdevs": 3, 00:22:19.586 "num_base_bdevs_discovered": 2, 00:22:19.586 "num_base_bdevs_operational": 2, 00:22:19.586 "base_bdevs_list": [ 00:22:19.586 { 00:22:19.586 "name": null, 00:22:19.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.586 "is_configured": false, 00:22:19.586 "data_offset": 0, 00:22:19.586 "data_size": 63488 00:22:19.586 }, 00:22:19.586 { 00:22:19.586 "name": "pt2", 00:22:19.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.586 "is_configured": true, 00:22:19.586 "data_offset": 2048, 00:22:19.586 "data_size": 63488 00:22:19.586 }, 00:22:19.586 { 00:22:19.586 "name": "pt3", 00:22:19.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.586 "is_configured": true, 00:22:19.586 "data_offset": 2048, 00:22:19.586 "data_size": 63488 00:22:19.586 } 00:22:19.586 ] 00:22:19.586 }' 00:22:19.587 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.587 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.152 
09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.152 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.152 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.152 [2024-11-06 09:15:18.976993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.152 [2024-11-06 09:15:18.977029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.152 [2024-11-06 09:15:18.977107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.152 [2024-11-06 09:15:18.977168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.153 [2024-11-06 09:15:18.977185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:20.153 09:15:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.153 [2024-11-06 09:15:19.060829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:22:20.153 [2024-11-06 09:15:19.060890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.153 [2024-11-06 09:15:19.060908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:20.153 [2024-11-06 09:15:19.060922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.153 [2024-11-06 09:15:19.063480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.153 [2024-11-06 09:15:19.063522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.153 [2024-11-06 09:15:19.063603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.153 [2024-11-06 09:15:19.063649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.153 pt2 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.153 "name": "raid_bdev1", 00:22:20.153 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:20.153 "strip_size_kb": 64, 00:22:20.153 "state": "configuring", 00:22:20.153 "raid_level": "raid5f", 00:22:20.153 "superblock": true, 00:22:20.153 "num_base_bdevs": 3, 00:22:20.153 "num_base_bdevs_discovered": 1, 00:22:20.153 "num_base_bdevs_operational": 2, 00:22:20.153 "base_bdevs_list": [ 00:22:20.153 { 00:22:20.153 "name": null, 00:22:20.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.153 "is_configured": false, 00:22:20.153 "data_offset": 2048, 00:22:20.153 "data_size": 63488 00:22:20.153 }, 00:22:20.153 { 00:22:20.153 "name": "pt2", 00:22:20.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.153 "is_configured": true, 00:22:20.153 "data_offset": 2048, 00:22:20.153 "data_size": 63488 00:22:20.153 }, 00:22:20.153 { 00:22:20.153 "name": null, 00:22:20.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.153 "is_configured": false, 00:22:20.153 "data_offset": 2048, 00:22:20.153 "data_size": 63488 00:22:20.153 } 00:22:20.153 ] 00:22:20.153 }' 00:22:20.153 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.153 09:15:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.411 [2024-11-06 09:15:19.412411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:20.411 [2024-11-06 09:15:19.412484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.411 [2024-11-06 09:15:19.412511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:20.411 [2024-11-06 09:15:19.412526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.411 [2024-11-06 09:15:19.413023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.411 [2024-11-06 09:15:19.413057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:20.411 [2024-11-06 09:15:19.413145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:20.411 [2024-11-06 09:15:19.413180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:20.411 [2024-11-06 09:15:19.413316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:20.411 [2024-11-06 09:15:19.413330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:20.411 [2024-11-06 
09:15:19.413584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:20.411 [2024-11-06 09:15:19.418829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:20.411 [2024-11-06 09:15:19.418855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:20.411 [2024-11-06 09:15:19.419178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.411 pt3 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.411 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.668 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.668 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.668 "name": "raid_bdev1", 00:22:20.668 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:20.668 "strip_size_kb": 64, 00:22:20.668 "state": "online", 00:22:20.668 "raid_level": "raid5f", 00:22:20.668 "superblock": true, 00:22:20.668 "num_base_bdevs": 3, 00:22:20.668 "num_base_bdevs_discovered": 2, 00:22:20.668 "num_base_bdevs_operational": 2, 00:22:20.668 "base_bdevs_list": [ 00:22:20.668 { 00:22:20.668 "name": null, 00:22:20.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.668 "is_configured": false, 00:22:20.668 "data_offset": 2048, 00:22:20.668 "data_size": 63488 00:22:20.668 }, 00:22:20.668 { 00:22:20.668 "name": "pt2", 00:22:20.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.668 "is_configured": true, 00:22:20.668 "data_offset": 2048, 00:22:20.668 "data_size": 63488 00:22:20.668 }, 00:22:20.668 { 00:22:20.668 "name": "pt3", 00:22:20.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.668 "is_configured": true, 00:22:20.668 "data_offset": 2048, 00:22:20.668 "data_size": 63488 00:22:20.668 } 00:22:20.668 ] 00:22:20.668 }' 00:22:20.668 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.668 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.927 [2024-11-06 09:15:19.829431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.927 [2024-11-06 09:15:19.829470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.927 [2024-11-06 09:15:19.829551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.927 [2024-11-06 09:15:19.829616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.927 [2024-11-06 09:15:19.829628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.927 09:15:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 [2024-11-06 09:15:19.901363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:20.927 [2024-11-06 09:15:19.901439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.927 [2024-11-06 09:15:19.901479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:20.927 [2024-11-06 09:15:19.901491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.927 [2024-11-06 09:15:19.904263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.927 [2024-11-06 09:15:19.904315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:20.927 [2024-11-06 09:15:19.904410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:20.927 [2024-11-06 09:15:19.904456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:20.927 [2024-11-06 09:15:19.904584] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:20.927 [2024-11-06 09:15:19.904596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.927 [2024-11-06 09:15:19.904615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:20.927 
[2024-11-06 09:15:19.904689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.927 pt1 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.927 "name": "raid_bdev1", 00:22:20.927 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:20.927 "strip_size_kb": 64, 00:22:20.927 "state": "configuring", 00:22:20.927 "raid_level": "raid5f", 00:22:20.927 "superblock": true, 00:22:20.927 "num_base_bdevs": 3, 00:22:20.927 "num_base_bdevs_discovered": 1, 00:22:20.927 "num_base_bdevs_operational": 2, 00:22:20.927 "base_bdevs_list": [ 00:22:20.927 { 00:22:20.927 "name": null, 00:22:20.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.927 "is_configured": false, 00:22:20.927 "data_offset": 2048, 00:22:20.927 "data_size": 63488 00:22:20.927 }, 00:22:20.927 { 00:22:20.927 "name": "pt2", 00:22:20.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.927 "is_configured": true, 00:22:20.927 "data_offset": 2048, 00:22:20.927 "data_size": 63488 00:22:20.927 }, 00:22:20.927 { 00:22:20.927 "name": null, 00:22:20.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.927 "is_configured": false, 00:22:20.927 "data_offset": 2048, 00:22:20.927 "data_size": 63488 00:22:20.927 } 00:22:20.927 ] 00:22:20.927 }' 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.927 09:15:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.496 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.496 [2024-11-06 09:15:20.393180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:21.496 [2024-11-06 09:15:20.393358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.496 [2024-11-06 09:15:20.393394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:21.496 [2024-11-06 09:15:20.393409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.496 [2024-11-06 09:15:20.394129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.496 [2024-11-06 09:15:20.394199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:21.496 [2024-11-06 09:15:20.394377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:21.496 [2024-11-06 09:15:20.394414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:21.496 [2024-11-06 09:15:20.394614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:21.496 [2024-11-06 09:15:20.394643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:21.496 [2024-11-06 09:15:20.395042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:21.496 [2024-11-06 09:15:20.401834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:21.496 [2024-11-06 
09:15:20.401952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:21.496 pt3 00:22:21.497 [2024-11-06 09:15:20.402569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.497 09:15:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.497 "name": "raid_bdev1", 00:22:21.497 "uuid": "4cafa66c-f99d-44a8-815f-45739552e6da", 00:22:21.497 "strip_size_kb": 64, 00:22:21.497 "state": "online", 00:22:21.497 "raid_level": "raid5f", 00:22:21.497 "superblock": true, 00:22:21.497 "num_base_bdevs": 3, 00:22:21.497 "num_base_bdevs_discovered": 2, 00:22:21.497 "num_base_bdevs_operational": 2, 00:22:21.497 "base_bdevs_list": [ 00:22:21.497 { 00:22:21.497 "name": null, 00:22:21.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.497 "is_configured": false, 00:22:21.497 "data_offset": 2048, 00:22:21.497 "data_size": 63488 00:22:21.497 }, 00:22:21.497 { 00:22:21.497 "name": "pt2", 00:22:21.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.497 "is_configured": true, 00:22:21.497 "data_offset": 2048, 00:22:21.497 "data_size": 63488 00:22:21.497 }, 00:22:21.497 { 00:22:21.497 "name": "pt3", 00:22:21.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.497 "is_configured": true, 00:22:21.497 "data_offset": 2048, 00:22:21.497 "data_size": 63488 00:22:21.497 } 00:22:21.497 ] 00:22:21.497 }' 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.497 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.065 [2024-11-06 09:15:20.922533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4cafa66c-f99d-44a8-815f-45739552e6da '!=' 4cafa66c-f99d-44a8-815f-45739552e6da ']' 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80837 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 80837 ']' 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 80837 00:22:22.065 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80837 00:22:22.066 killing process with pid 80837 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 80837' 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 80837 00:22:22.066 [2024-11-06 09:15:20.996976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.066 [2024-11-06 09:15:20.997077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.066 09:15:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 80837 00:22:22.066 [2024-11-06 09:15:20.997142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.066 [2024-11-06 09:15:20.997170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:22.332 [2024-11-06 09:15:21.303350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:23.706 09:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:23.707 00:22:23.707 real 0m7.574s 00:22:23.707 user 0m11.780s 00:22:23.707 sys 0m1.552s 00:22:23.707 09:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:23.707 09:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 ************************************ 00:22:23.707 END TEST raid5f_superblock_test 00:22:23.707 ************************************ 00:22:23.707 09:15:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:23.707 09:15:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:23.707 09:15:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:23.707 09:15:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:23.707 09:15:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 ************************************ 00:22:23.707 START TEST 
raid5f_rebuild_test 00:22:23.707 ************************************ 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:23.707 09:15:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81281 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81281 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81281 ']' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:23.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:23.707 09:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:23.707 Zero copy mechanism will not be used. 00:22:23.707 [2024-11-06 09:15:22.591387] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:22:23.707 [2024-11-06 09:15:22.591556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81281 ] 00:22:23.965 [2024-11-06 09:15:22.781607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.965 [2024-11-06 09:15:22.902301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.223 [2024-11-06 09:15:23.103122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.223 [2024-11-06 09:15:23.103194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.481 BaseBdev1_malloc 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.481 [2024-11-06 09:15:23.486029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:24.481 [2024-11-06 09:15:23.486097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.481 [2024-11-06 09:15:23.486123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:24.481 [2024-11-06 09:15:23.486138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.481 [2024-11-06 09:15:23.488657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.481 [2024-11-06 09:15:23.488696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:24.481 BaseBdev1 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.481 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 BaseBdev2_malloc 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 [2024-11-06 09:15:23.539472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:24.739 [2024-11-06 09:15:23.539538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.739 [2024-11-06 09:15:23.539560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:24.739 [2024-11-06 09:15:23.539577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.739 [2024-11-06 09:15:23.542026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.739 [2024-11-06 09:15:23.542071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:24.739 BaseBdev2 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 BaseBdev3_malloc 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 [2024-11-06 09:15:23.605759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:24.739 [2024-11-06 09:15:23.605821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.739 [2024-11-06 09:15:23.605845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:24.739 [2024-11-06 09:15:23.605859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.739 [2024-11-06 09:15:23.608422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.739 [2024-11-06 09:15:23.608465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:24.739 BaseBdev3 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 spare_malloc 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 spare_delay 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 [2024-11-06 09:15:23.674085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:24.739 [2024-11-06 09:15:23.674159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.739 [2024-11-06 09:15:23.674183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:24.739 [2024-11-06 09:15:23.674197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.739 [2024-11-06 09:15:23.676824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.739 [2024-11-06 09:15:23.676874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:24.739 spare 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 [2024-11-06 09:15:23.686136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.739 [2024-11-06 09:15:23.688352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.739 [2024-11-06 09:15:23.688417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.739 [2024-11-06 09:15:23.688511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:22:24.739 [2024-11-06 09:15:23.688525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:24.739 [2024-11-06 09:15:23.688839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:24.739 [2024-11-06 09:15:23.695209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:24.739 [2024-11-06 09:15:23.695236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:24.739 [2024-11-06 09:15:23.695498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.739 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.739 "name": "raid_bdev1", 00:22:24.739 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:24.739 "strip_size_kb": 64, 00:22:24.739 "state": "online", 00:22:24.739 "raid_level": "raid5f", 00:22:24.739 "superblock": false, 00:22:24.739 "num_base_bdevs": 3, 00:22:24.739 "num_base_bdevs_discovered": 3, 00:22:24.739 "num_base_bdevs_operational": 3, 00:22:24.739 "base_bdevs_list": [ 00:22:24.739 { 00:22:24.739 "name": "BaseBdev1", 00:22:24.739 "uuid": "b7341738-a69e-5e50-a214-861905647b47", 00:22:24.739 "is_configured": true, 00:22:24.739 "data_offset": 0, 00:22:24.739 "data_size": 65536 00:22:24.739 }, 00:22:24.739 { 00:22:24.739 "name": "BaseBdev2", 00:22:24.739 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:24.739 "is_configured": true, 00:22:24.739 "data_offset": 0, 00:22:24.739 "data_size": 65536 00:22:24.739 }, 00:22:24.739 { 00:22:24.739 "name": "BaseBdev3", 00:22:24.739 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:24.739 "is_configured": true, 00:22:24.739 "data_offset": 0, 00:22:24.739 "data_size": 65536 00:22:24.740 } 00:22:24.740 ] 00:22:24.740 }' 00:22:24.740 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.740 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.305 [2024-11-06 09:15:24.166522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:25.305 
09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:25.305 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:25.563 [2024-11-06 09:15:24.446392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:25.563 /dev/nbd0 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:22:25.563 1+0 records in 00:22:25.563 1+0 records out 00:22:25.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275009 s, 14.9 MB/s 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:25.563 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:26.130 512+0 records in 00:22:26.130 512+0 records out 00:22:26.130 67108864 bytes (67 MB, 64 MiB) copied, 0.38214 s, 176 MB/s 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:26.130 09:15:24 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:26.130 09:15:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:26.130 [2024-11-06 09:15:25.110034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.130 [2024-11-06 09:15:25.149550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.130 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.389 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.389 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.389 "name": "raid_bdev1", 00:22:26.389 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:26.389 "strip_size_kb": 64, 00:22:26.389 "state": "online", 00:22:26.389 "raid_level": "raid5f", 00:22:26.389 "superblock": false, 00:22:26.389 "num_base_bdevs": 3, 00:22:26.389 "num_base_bdevs_discovered": 2, 00:22:26.389 "num_base_bdevs_operational": 2, 00:22:26.389 "base_bdevs_list": [ 00:22:26.389 { 00:22:26.389 "name": null, 00:22:26.389 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:26.389 "is_configured": false, 00:22:26.389 "data_offset": 0, 00:22:26.389 "data_size": 65536 00:22:26.389 }, 00:22:26.389 { 00:22:26.389 "name": "BaseBdev2", 00:22:26.389 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:26.389 "is_configured": true, 00:22:26.389 "data_offset": 0, 00:22:26.389 "data_size": 65536 00:22:26.389 }, 00:22:26.389 { 00:22:26.389 "name": "BaseBdev3", 00:22:26.389 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:26.389 "is_configured": true, 00:22:26.389 "data_offset": 0, 00:22:26.389 "data_size": 65536 00:22:26.389 } 00:22:26.389 ] 00:22:26.389 }' 00:22:26.389 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.389 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.648 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.648 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.648 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.648 [2024-11-06 09:15:25.620948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.648 [2024-11-06 09:15:25.641075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:22:26.648 09:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.648 09:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:26.648 [2024-11-06 09:15:25.650814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.026 
09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.026 "name": "raid_bdev1", 00:22:28.026 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:28.026 "strip_size_kb": 64, 00:22:28.026 "state": "online", 00:22:28.026 "raid_level": "raid5f", 00:22:28.026 "superblock": false, 00:22:28.026 "num_base_bdevs": 3, 00:22:28.026 "num_base_bdevs_discovered": 3, 00:22:28.026 "num_base_bdevs_operational": 3, 00:22:28.026 "process": { 00:22:28.026 "type": "rebuild", 00:22:28.026 "target": "spare", 00:22:28.026 "progress": { 00:22:28.026 "blocks": 20480, 00:22:28.026 "percent": 15 00:22:28.026 } 00:22:28.026 }, 00:22:28.026 "base_bdevs_list": [ 00:22:28.026 { 00:22:28.026 "name": "spare", 00:22:28.026 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:28.026 "is_configured": true, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 }, 00:22:28.026 { 00:22:28.026 "name": "BaseBdev2", 00:22:28.026 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:28.026 "is_configured": true, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 }, 00:22:28.026 
{ 00:22:28.026 "name": "BaseBdev3", 00:22:28.026 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:28.026 "is_configured": true, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 } 00:22:28.026 ] 00:22:28.026 }' 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.026 [2024-11-06 09:15:26.794432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.026 [2024-11-06 09:15:26.861401] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.026 [2024-11-06 09:15:26.861678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.026 [2024-11-06 09:15:26.861814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.026 [2024-11-06 09:15:26.861861] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.026 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.026 "name": "raid_bdev1", 00:22:28.026 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:28.026 "strip_size_kb": 64, 00:22:28.026 "state": "online", 00:22:28.026 "raid_level": "raid5f", 00:22:28.026 "superblock": false, 00:22:28.026 "num_base_bdevs": 3, 00:22:28.026 "num_base_bdevs_discovered": 2, 00:22:28.026 "num_base_bdevs_operational": 2, 00:22:28.026 "base_bdevs_list": [ 00:22:28.026 { 00:22:28.026 "name": null, 00:22:28.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.026 
"is_configured": false, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 }, 00:22:28.026 { 00:22:28.026 "name": "BaseBdev2", 00:22:28.026 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:28.026 "is_configured": true, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 }, 00:22:28.026 { 00:22:28.026 "name": "BaseBdev3", 00:22:28.026 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:28.026 "is_configured": true, 00:22:28.026 "data_offset": 0, 00:22:28.026 "data_size": 65536 00:22:28.026 } 00:22:28.026 ] 00:22:28.027 }' 00:22:28.027 09:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.027 09:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.598 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.598 "name": 
"raid_bdev1", 00:22:28.598 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:28.598 "strip_size_kb": 64, 00:22:28.598 "state": "online", 00:22:28.598 "raid_level": "raid5f", 00:22:28.598 "superblock": false, 00:22:28.598 "num_base_bdevs": 3, 00:22:28.598 "num_base_bdevs_discovered": 2, 00:22:28.598 "num_base_bdevs_operational": 2, 00:22:28.598 "base_bdevs_list": [ 00:22:28.598 { 00:22:28.599 "name": null, 00:22:28.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.599 "is_configured": false, 00:22:28.599 "data_offset": 0, 00:22:28.599 "data_size": 65536 00:22:28.599 }, 00:22:28.599 { 00:22:28.599 "name": "BaseBdev2", 00:22:28.599 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:28.599 "is_configured": true, 00:22:28.599 "data_offset": 0, 00:22:28.599 "data_size": 65536 00:22:28.599 }, 00:22:28.599 { 00:22:28.599 "name": "BaseBdev3", 00:22:28.599 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:28.599 "is_configured": true, 00:22:28.599 "data_offset": 0, 00:22:28.599 "data_size": 65536 00:22:28.599 } 00:22:28.599 ] 00:22:28.599 }' 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.599 [2024-11-06 09:15:27.476119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.599 [2024-11-06 
09:15:27.492017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.599 09:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:28.599 [2024-11-06 09:15:27.499735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.551 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.551 "name": "raid_bdev1", 00:22:29.551 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:29.551 "strip_size_kb": 64, 00:22:29.551 "state": "online", 00:22:29.551 "raid_level": "raid5f", 00:22:29.551 "superblock": false, 00:22:29.551 "num_base_bdevs": 3, 00:22:29.551 "num_base_bdevs_discovered": 3, 00:22:29.551 "num_base_bdevs_operational": 3, 
00:22:29.551 "process": { 00:22:29.551 "type": "rebuild", 00:22:29.551 "target": "spare", 00:22:29.551 "progress": { 00:22:29.551 "blocks": 20480, 00:22:29.552 "percent": 15 00:22:29.552 } 00:22:29.552 }, 00:22:29.552 "base_bdevs_list": [ 00:22:29.552 { 00:22:29.552 "name": "spare", 00:22:29.552 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:29.552 "is_configured": true, 00:22:29.552 "data_offset": 0, 00:22:29.552 "data_size": 65536 00:22:29.552 }, 00:22:29.552 { 00:22:29.552 "name": "BaseBdev2", 00:22:29.552 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:29.552 "is_configured": true, 00:22:29.552 "data_offset": 0, 00:22:29.552 "data_size": 65536 00:22:29.552 }, 00:22:29.552 { 00:22:29.552 "name": "BaseBdev3", 00:22:29.552 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:29.552 "is_configured": true, 00:22:29.552 "data_offset": 0, 00:22:29.552 "data_size": 65536 00:22:29.552 } 00:22:29.552 ] 00:22:29.552 }' 00:22:29.552 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.812 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.812 "name": "raid_bdev1", 00:22:29.812 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:29.812 "strip_size_kb": 64, 00:22:29.812 "state": "online", 00:22:29.812 "raid_level": "raid5f", 00:22:29.812 "superblock": false, 00:22:29.812 "num_base_bdevs": 3, 00:22:29.812 "num_base_bdevs_discovered": 3, 00:22:29.812 "num_base_bdevs_operational": 3, 00:22:29.812 "process": { 00:22:29.812 "type": "rebuild", 00:22:29.812 "target": "spare", 00:22:29.812 "progress": { 00:22:29.812 "blocks": 22528, 00:22:29.813 "percent": 17 00:22:29.813 } 00:22:29.813 }, 00:22:29.813 "base_bdevs_list": [ 00:22:29.813 { 00:22:29.813 "name": "spare", 00:22:29.813 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:29.813 "is_configured": true, 00:22:29.813 "data_offset": 0, 00:22:29.813 "data_size": 65536 00:22:29.813 }, 00:22:29.813 { 00:22:29.813 "name": "BaseBdev2", 
00:22:29.813 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:29.813 "is_configured": true, 00:22:29.813 "data_offset": 0, 00:22:29.813 "data_size": 65536 00:22:29.813 }, 00:22:29.813 { 00:22:29.813 "name": "BaseBdev3", 00:22:29.813 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:29.813 "is_configured": true, 00:22:29.813 "data_offset": 0, 00:22:29.813 "data_size": 65536 00:22:29.813 } 00:22:29.813 ] 00:22:29.813 }' 00:22:29.813 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.813 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.813 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.813 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.813 09:15:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.753 
09:15:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.753 09:15:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.016 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.016 "name": "raid_bdev1", 00:22:31.016 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:31.016 "strip_size_kb": 64, 00:22:31.016 "state": "online", 00:22:31.016 "raid_level": "raid5f", 00:22:31.016 "superblock": false, 00:22:31.016 "num_base_bdevs": 3, 00:22:31.016 "num_base_bdevs_discovered": 3, 00:22:31.016 "num_base_bdevs_operational": 3, 00:22:31.016 "process": { 00:22:31.016 "type": "rebuild", 00:22:31.016 "target": "spare", 00:22:31.016 "progress": { 00:22:31.016 "blocks": 45056, 00:22:31.016 "percent": 34 00:22:31.016 } 00:22:31.016 }, 00:22:31.016 "base_bdevs_list": [ 00:22:31.016 { 00:22:31.016 "name": "spare", 00:22:31.016 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:31.016 "is_configured": true, 00:22:31.016 "data_offset": 0, 00:22:31.016 "data_size": 65536 00:22:31.016 }, 00:22:31.016 { 00:22:31.016 "name": "BaseBdev2", 00:22:31.016 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:31.016 "is_configured": true, 00:22:31.016 "data_offset": 0, 00:22:31.016 "data_size": 65536 00:22:31.016 }, 00:22:31.016 { 00:22:31.016 "name": "BaseBdev3", 00:22:31.017 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:31.017 "is_configured": true, 00:22:31.017 "data_offset": 0, 00:22:31.017 "data_size": 65536 00:22:31.017 } 00:22:31.017 ] 00:22:31.017 }' 00:22:31.017 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.017 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.017 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.017 09:15:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.017 09:15:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.953 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.953 "name": "raid_bdev1", 00:22:31.953 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:31.953 "strip_size_kb": 64, 00:22:31.953 "state": "online", 00:22:31.953 "raid_level": "raid5f", 00:22:31.953 "superblock": false, 00:22:31.953 "num_base_bdevs": 3, 00:22:31.953 "num_base_bdevs_discovered": 3, 00:22:31.953 "num_base_bdevs_operational": 3, 00:22:31.953 "process": { 00:22:31.953 "type": "rebuild", 00:22:31.953 "target": "spare", 00:22:31.953 "progress": { 00:22:31.953 "blocks": 67584, 00:22:31.953 "percent": 51 00:22:31.953 } 
00:22:31.953 }, 00:22:31.953 "base_bdevs_list": [ 00:22:31.953 { 00:22:31.953 "name": "spare", 00:22:31.954 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:31.954 "is_configured": true, 00:22:31.954 "data_offset": 0, 00:22:31.954 "data_size": 65536 00:22:31.954 }, 00:22:31.954 { 00:22:31.954 "name": "BaseBdev2", 00:22:31.954 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:31.954 "is_configured": true, 00:22:31.954 "data_offset": 0, 00:22:31.954 "data_size": 65536 00:22:31.954 }, 00:22:31.954 { 00:22:31.954 "name": "BaseBdev3", 00:22:31.954 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:31.954 "is_configured": true, 00:22:31.954 "data_offset": 0, 00:22:31.954 "data_size": 65536 00:22:31.954 } 00:22:31.954 ] 00:22:31.954 }' 00:22:31.954 09:15:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.212 09:15:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.212 09:15:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.212 09:15:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.212 09:15:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.182 09:15:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.182 "name": "raid_bdev1", 00:22:33.182 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:33.182 "strip_size_kb": 64, 00:22:33.182 "state": "online", 00:22:33.182 "raid_level": "raid5f", 00:22:33.182 "superblock": false, 00:22:33.182 "num_base_bdevs": 3, 00:22:33.182 "num_base_bdevs_discovered": 3, 00:22:33.182 "num_base_bdevs_operational": 3, 00:22:33.182 "process": { 00:22:33.182 "type": "rebuild", 00:22:33.182 "target": "spare", 00:22:33.182 "progress": { 00:22:33.182 "blocks": 92160, 00:22:33.182 "percent": 70 00:22:33.182 } 00:22:33.182 }, 00:22:33.182 "base_bdevs_list": [ 00:22:33.182 { 00:22:33.182 "name": "spare", 00:22:33.182 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:33.182 "is_configured": true, 00:22:33.182 "data_offset": 0, 00:22:33.182 "data_size": 65536 00:22:33.182 }, 00:22:33.182 { 00:22:33.182 "name": "BaseBdev2", 00:22:33.182 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:33.182 "is_configured": true, 00:22:33.182 "data_offset": 0, 00:22:33.182 "data_size": 65536 00:22:33.182 }, 00:22:33.182 { 00:22:33.182 "name": "BaseBdev3", 00:22:33.182 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:33.182 "is_configured": true, 00:22:33.182 "data_offset": 0, 00:22:33.182 "data_size": 65536 00:22:33.182 } 00:22:33.182 ] 00:22:33.182 }' 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.182 09:15:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.558 "name": "raid_bdev1", 00:22:34.558 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:34.558 "strip_size_kb": 64, 00:22:34.558 "state": "online", 00:22:34.558 "raid_level": "raid5f", 00:22:34.558 "superblock": 
false, 00:22:34.558 "num_base_bdevs": 3, 00:22:34.558 "num_base_bdevs_discovered": 3, 00:22:34.558 "num_base_bdevs_operational": 3, 00:22:34.558 "process": { 00:22:34.558 "type": "rebuild", 00:22:34.558 "target": "spare", 00:22:34.558 "progress": { 00:22:34.558 "blocks": 114688, 00:22:34.558 "percent": 87 00:22:34.558 } 00:22:34.558 }, 00:22:34.558 "base_bdevs_list": [ 00:22:34.558 { 00:22:34.558 "name": "spare", 00:22:34.558 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:34.558 "is_configured": true, 00:22:34.558 "data_offset": 0, 00:22:34.558 "data_size": 65536 00:22:34.558 }, 00:22:34.558 { 00:22:34.558 "name": "BaseBdev2", 00:22:34.558 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:34.558 "is_configured": true, 00:22:34.558 "data_offset": 0, 00:22:34.558 "data_size": 65536 00:22:34.558 }, 00:22:34.558 { 00:22:34.558 "name": "BaseBdev3", 00:22:34.558 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:34.558 "is_configured": true, 00:22:34.558 "data_offset": 0, 00:22:34.558 "data_size": 65536 00:22:34.558 } 00:22:34.558 ] 00:22:34.558 }' 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.558 09:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:35.125 [2024-11-06 09:15:33.954631] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:35.125 [2024-11-06 09:15:33.954736] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:35.125 [2024-11-06 09:15:33.954786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.384 "name": "raid_bdev1", 00:22:35.384 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:35.384 "strip_size_kb": 64, 00:22:35.384 "state": "online", 00:22:35.384 "raid_level": "raid5f", 00:22:35.384 "superblock": false, 00:22:35.384 "num_base_bdevs": 3, 00:22:35.384 "num_base_bdevs_discovered": 3, 00:22:35.384 "num_base_bdevs_operational": 3, 00:22:35.384 "base_bdevs_list": [ 00:22:35.384 { 00:22:35.384 "name": "spare", 00:22:35.384 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 0, 00:22:35.384 "data_size": 65536 00:22:35.384 }, 00:22:35.384 { 00:22:35.384 "name": "BaseBdev2", 00:22:35.384 "uuid": 
"1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 0, 00:22:35.384 "data_size": 65536 00:22:35.384 }, 00:22:35.384 { 00:22:35.384 "name": "BaseBdev3", 00:22:35.384 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 0, 00:22:35.384 "data_size": 65536 00:22:35.384 } 00:22:35.384 ] 00:22:35.384 }' 00:22:35.384 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.643 "name": "raid_bdev1", 00:22:35.643 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:35.643 "strip_size_kb": 64, 00:22:35.643 "state": "online", 00:22:35.643 "raid_level": "raid5f", 00:22:35.643 "superblock": false, 00:22:35.643 "num_base_bdevs": 3, 00:22:35.643 "num_base_bdevs_discovered": 3, 00:22:35.643 "num_base_bdevs_operational": 3, 00:22:35.643 "base_bdevs_list": [ 00:22:35.643 { 00:22:35.643 "name": "spare", 00:22:35.643 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 0, 00:22:35.643 "data_size": 65536 00:22:35.643 }, 00:22:35.643 { 00:22:35.643 "name": "BaseBdev2", 00:22:35.643 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 0, 00:22:35.643 "data_size": 65536 00:22:35.643 }, 00:22:35.643 { 00:22:35.643 "name": "BaseBdev3", 00:22:35.643 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 0, 00:22:35.643 "data_size": 65536 00:22:35.643 } 00:22:35.643 ] 00:22:35.643 }' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.643 "name": "raid_bdev1", 00:22:35.643 "uuid": "bf2fd8f7-539d-469d-837f-85e20582896f", 00:22:35.643 "strip_size_kb": 64, 00:22:35.643 "state": "online", 00:22:35.643 "raid_level": "raid5f", 00:22:35.643 "superblock": false, 00:22:35.643 "num_base_bdevs": 3, 00:22:35.643 "num_base_bdevs_discovered": 3, 00:22:35.643 "num_base_bdevs_operational": 3, 00:22:35.643 "base_bdevs_list": [ 00:22:35.643 { 00:22:35.643 "name": "spare", 00:22:35.643 "uuid": "dc2364fb-8dc2-523b-992a-1276210b1f17", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 
0, 00:22:35.643 "data_size": 65536 00:22:35.643 }, 00:22:35.643 { 00:22:35.643 "name": "BaseBdev2", 00:22:35.643 "uuid": "1bf2cbc0-6157-520b-8961-6e23367fb500", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 0, 00:22:35.643 "data_size": 65536 00:22:35.643 }, 00:22:35.643 { 00:22:35.643 "name": "BaseBdev3", 00:22:35.643 "uuid": "3b241bb4-912b-557a-a563-1e969ff47962", 00:22:35.643 "is_configured": true, 00:22:35.643 "data_offset": 0, 00:22:35.643 "data_size": 65536 00:22:35.643 } 00:22:35.643 ] 00:22:35.643 }' 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.643 09:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.211 [2024-11-06 09:15:35.046602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.211 [2024-11-06 09:15:35.046639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.211 [2024-11-06 09:15:35.046728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.211 [2024-11-06 09:15:35.046818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.211 [2024-11-06 09:15:35.046837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.211 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:36.471 /dev/nbd0 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.471 1+0 records in 00:22:36.471 1+0 records out 00:22:36.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378613 s, 10.8 MB/s 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.471 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.471 
09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:36.731 /dev/nbd1 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.731 1+0 records in 00:22:36.731 1+0 records out 00:22:36.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523612 s, 7.8 MB/s 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.731 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.989 09:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.247 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81281 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81281 ']' 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81281 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:22:37.505 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81281 00:22:37.506 killing process with pid 81281 00:22:37.506 Received shutdown signal, test time 
was about 60.000000 seconds 00:22:37.506 00:22:37.506 Latency(us) 00:22:37.506 [2024-11-06T09:15:36.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.506 [2024-11-06T09:15:36.546Z] =================================================================================================================== 00:22:37.506 [2024-11-06T09:15:36.546Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81281' 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81281 00:22:37.506 [2024-11-06 09:15:36.333963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.506 09:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81281 00:22:37.764 [2024-11-06 09:15:36.744386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:39.140 00:22:39.140 real 0m15.411s 00:22:39.140 user 0m18.814s 00:22:39.140 sys 0m2.241s 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 ************************************ 00:22:39.140 END TEST raid5f_rebuild_test 00:22:39.140 ************************************ 00:22:39.140 09:15:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:39.140 09:15:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:39.140 09:15:37 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:22:39.140 09:15:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 ************************************ 00:22:39.140 START TEST raid5f_rebuild_test_sb 00:22:39.140 ************************************ 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:39.140 09:15:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81719 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81719 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81719 
']' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.140 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.140 [2024-11-06 09:15:38.093812] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:22:39.140 [2024-11-06 09:15:38.094008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81719 ] 00:22:39.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:39.140 Zero copy mechanism will not be used. 
00:22:39.399 [2024-11-06 09:15:38.273948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.399 [2024-11-06 09:15:38.388182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.656 [2024-11-06 09:15:38.596609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.656 [2024-11-06 09:15:38.596684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.914 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.172 BaseBdev1_malloc 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.172 [2024-11-06 09:15:38.981814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:40.172 [2024-11-06 09:15:38.981888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.172 [2024-11-06 09:15:38.981914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:40.172 
[2024-11-06 09:15:38.981929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.172 [2024-11-06 09:15:38.984372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.172 [2024-11-06 09:15:38.984414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:40.172 BaseBdev1 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:40.172 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 BaseBdev2_malloc 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 [2024-11-06 09:15:39.037660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:40.173 [2024-11-06 09:15:39.037733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.173 [2024-11-06 09:15:39.037755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:40.173 [2024-11-06 09:15:39.037769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.173 [2024-11-06 09:15:39.040107] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.173 [2024-11-06 09:15:39.040150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:40.173 BaseBdev2 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 BaseBdev3_malloc 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 [2024-11-06 09:15:39.104479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:40.173 [2024-11-06 09:15:39.104541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.173 [2024-11-06 09:15:39.104565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:40.173 [2024-11-06 09:15:39.104580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.173 [2024-11-06 09:15:39.106932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.173 [2024-11-06 09:15:39.106981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:22:40.173 BaseBdev3 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 spare_malloc 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 spare_delay 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 [2024-11-06 09:15:39.173112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.173 [2024-11-06 09:15:39.173169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.173 [2024-11-06 09:15:39.173190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:40.173 [2024-11-06 09:15:39.173203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.173 [2024-11-06 09:15:39.175598] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.173 [2024-11-06 09:15:39.175644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.173 spare 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 [2024-11-06 09:15:39.185169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.173 [2024-11-06 09:15:39.187192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.173 [2024-11-06 09:15:39.187262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.173 [2024-11-06 09:15:39.187443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:40.173 [2024-11-06 09:15:39.187458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:40.173 [2024-11-06 09:15:39.187722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:40.173 [2024-11-06 09:15:39.193701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:40.173 [2024-11-06 09:15:39.193732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:40.173 [2024-11-06 09:15:39.193918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.173 09:15:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.173 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.431 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.431 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.431 "name": "raid_bdev1", 00:22:40.431 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:40.431 "strip_size_kb": 64, 00:22:40.431 "state": "online", 00:22:40.431 "raid_level": "raid5f", 00:22:40.431 "superblock": true, 
00:22:40.431 "num_base_bdevs": 3, 00:22:40.431 "num_base_bdevs_discovered": 3, 00:22:40.431 "num_base_bdevs_operational": 3, 00:22:40.431 "base_bdevs_list": [ 00:22:40.431 { 00:22:40.431 "name": "BaseBdev1", 00:22:40.431 "uuid": "e6e4fb6b-4824-5513-8e65-1ca154b4b444", 00:22:40.431 "is_configured": true, 00:22:40.431 "data_offset": 2048, 00:22:40.431 "data_size": 63488 00:22:40.431 }, 00:22:40.431 { 00:22:40.431 "name": "BaseBdev2", 00:22:40.431 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:40.431 "is_configured": true, 00:22:40.431 "data_offset": 2048, 00:22:40.431 "data_size": 63488 00:22:40.431 }, 00:22:40.431 { 00:22:40.431 "name": "BaseBdev3", 00:22:40.431 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:40.431 "is_configured": true, 00:22:40.431 "data_offset": 2048, 00:22:40.431 "data_size": 63488 00:22:40.431 } 00:22:40.431 ] 00:22:40.431 }' 00:22:40.431 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.431 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.689 [2024-11-06 09:15:39.591801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.689 09:15:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:40.689 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:22:40.947 [2024-11-06 09:15:39.867393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:40.947 /dev/nbd0 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:40.947 1+0 records in 00:22:40.947 1+0 records out 00:22:40.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463336 s, 8.8 MB/s 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:40.947 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:41.512 496+0 records in 00:22:41.512 496+0 records out 00:22:41.512 65011712 bytes (65 MB, 62 MiB) copied, 0.432869 s, 150 MB/s 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:41.512 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:41.772 [2024-11-06 09:15:40.623801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.772 [2024-11-06 09:15:40.642857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.772 09:15:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.772 "name": "raid_bdev1", 00:22:41.772 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:41.772 "strip_size_kb": 64, 00:22:41.772 "state": "online", 00:22:41.772 "raid_level": "raid5f", 00:22:41.772 "superblock": true, 00:22:41.772 "num_base_bdevs": 3, 00:22:41.772 "num_base_bdevs_discovered": 2, 00:22:41.772 "num_base_bdevs_operational": 2, 00:22:41.772 "base_bdevs_list": [ 00:22:41.772 { 00:22:41.772 "name": null, 00:22:41.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.772 "is_configured": false, 00:22:41.772 "data_offset": 0, 00:22:41.772 "data_size": 63488 00:22:41.772 }, 00:22:41.772 { 00:22:41.772 "name": "BaseBdev2", 00:22:41.772 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:41.772 "is_configured": true, 00:22:41.772 "data_offset": 2048, 00:22:41.772 "data_size": 63488 00:22:41.772 }, 00:22:41.772 { 00:22:41.772 "name": "BaseBdev3", 00:22:41.772 "uuid": 
"ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:41.772 "is_configured": true, 00:22:41.772 "data_offset": 2048, 00:22:41.772 "data_size": 63488 00:22:41.772 } 00:22:41.772 ] 00:22:41.772 }' 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.772 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.339 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.339 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.339 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.339 [2024-11-06 09:15:41.082469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.339 [2024-11-06 09:15:41.100743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:42.339 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.339 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:42.339 [2024-11-06 09:15:41.108545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.271 "name": "raid_bdev1", 00:22:43.271 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:43.271 "strip_size_kb": 64, 00:22:43.271 "state": "online", 00:22:43.271 "raid_level": "raid5f", 00:22:43.271 "superblock": true, 00:22:43.271 "num_base_bdevs": 3, 00:22:43.271 "num_base_bdevs_discovered": 3, 00:22:43.271 "num_base_bdevs_operational": 3, 00:22:43.271 "process": { 00:22:43.271 "type": "rebuild", 00:22:43.271 "target": "spare", 00:22:43.271 "progress": { 00:22:43.271 "blocks": 18432, 00:22:43.271 "percent": 14 00:22:43.271 } 00:22:43.271 }, 00:22:43.271 "base_bdevs_list": [ 00:22:43.271 { 00:22:43.271 "name": "spare", 00:22:43.271 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:43.271 "is_configured": true, 00:22:43.271 "data_offset": 2048, 00:22:43.271 "data_size": 63488 00:22:43.271 }, 00:22:43.271 { 00:22:43.271 "name": "BaseBdev2", 00:22:43.271 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:43.271 "is_configured": true, 00:22:43.271 "data_offset": 2048, 00:22:43.271 "data_size": 63488 00:22:43.271 }, 00:22:43.271 { 00:22:43.271 "name": "BaseBdev3", 00:22:43.271 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:43.271 "is_configured": true, 00:22:43.271 "data_offset": 2048, 00:22:43.271 "data_size": 63488 00:22:43.271 } 00:22:43.271 ] 00:22:43.271 }' 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.271 09:15:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.271 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.271 [2024-11-06 09:15:42.244380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:43.529 [2024-11-06 09:15:42.319366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:43.529 [2024-11-06 09:15:42.319461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.529 [2024-11-06 09:15:42.319487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:43.529 [2024-11-06 09:15:42.319499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.529 09:15:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.529 "name": "raid_bdev1", 00:22:43.529 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:43.529 "strip_size_kb": 64, 00:22:43.529 "state": "online", 00:22:43.529 "raid_level": "raid5f", 00:22:43.529 "superblock": true, 00:22:43.529 "num_base_bdevs": 3, 00:22:43.529 "num_base_bdevs_discovered": 2, 00:22:43.529 "num_base_bdevs_operational": 2, 00:22:43.529 "base_bdevs_list": [ 00:22:43.529 { 00:22:43.529 "name": null, 00:22:43.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.529 "is_configured": false, 00:22:43.529 "data_offset": 0, 00:22:43.529 "data_size": 63488 00:22:43.529 }, 00:22:43.529 { 00:22:43.529 "name": "BaseBdev2", 00:22:43.529 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:43.529 "is_configured": true, 00:22:43.529 "data_offset": 2048, 00:22:43.529 "data_size": 
63488 00:22:43.529 }, 00:22:43.529 { 00:22:43.529 "name": "BaseBdev3", 00:22:43.529 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:43.529 "is_configured": true, 00:22:43.529 "data_offset": 2048, 00:22:43.529 "data_size": 63488 00:22:43.529 } 00:22:43.529 ] 00:22:43.529 }' 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.529 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.788 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:44.046 "name": "raid_bdev1", 00:22:44.046 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:44.046 "strip_size_kb": 64, 00:22:44.046 "state": "online", 00:22:44.046 "raid_level": "raid5f", 00:22:44.046 "superblock": true, 00:22:44.046 "num_base_bdevs": 3, 00:22:44.046 
"num_base_bdevs_discovered": 2, 00:22:44.046 "num_base_bdevs_operational": 2, 00:22:44.046 "base_bdevs_list": [ 00:22:44.046 { 00:22:44.046 "name": null, 00:22:44.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.046 "is_configured": false, 00:22:44.046 "data_offset": 0, 00:22:44.046 "data_size": 63488 00:22:44.046 }, 00:22:44.046 { 00:22:44.046 "name": "BaseBdev2", 00:22:44.046 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:44.046 "is_configured": true, 00:22:44.046 "data_offset": 2048, 00:22:44.046 "data_size": 63488 00:22:44.046 }, 00:22:44.046 { 00:22:44.046 "name": "BaseBdev3", 00:22:44.046 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:44.046 "is_configured": true, 00:22:44.046 "data_offset": 2048, 00:22:44.046 "data_size": 63488 00:22:44.046 } 00:22:44.046 ] 00:22:44.046 }' 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.046 [2024-11-06 09:15:42.932407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.046 [2024-11-06 09:15:42.950738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:44.046 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.046 09:15:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:44.046 [2024-11-06 09:15:42.959647] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.028 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.028 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.028 "name": "raid_bdev1", 00:22:45.028 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:45.028 "strip_size_kb": 64, 00:22:45.028 "state": "online", 00:22:45.028 "raid_level": "raid5f", 00:22:45.028 "superblock": true, 00:22:45.028 "num_base_bdevs": 3, 00:22:45.028 "num_base_bdevs_discovered": 3, 00:22:45.028 "num_base_bdevs_operational": 3, 00:22:45.028 "process": { 00:22:45.028 "type": "rebuild", 00:22:45.028 "target": "spare", 00:22:45.028 "progress": { 00:22:45.028 "blocks": 20480, 00:22:45.028 "percent": 16 00:22:45.028 } 
00:22:45.028 }, 00:22:45.028 "base_bdevs_list": [ 00:22:45.028 { 00:22:45.028 "name": "spare", 00:22:45.028 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:45.028 "is_configured": true, 00:22:45.028 "data_offset": 2048, 00:22:45.028 "data_size": 63488 00:22:45.028 }, 00:22:45.028 { 00:22:45.028 "name": "BaseBdev2", 00:22:45.028 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:45.028 "is_configured": true, 00:22:45.028 "data_offset": 2048, 00:22:45.028 "data_size": 63488 00:22:45.028 }, 00:22:45.028 { 00:22:45.028 "name": "BaseBdev3", 00:22:45.028 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:45.028 "is_configured": true, 00:22:45.028 "data_offset": 2048, 00:22:45.028 "data_size": 63488 00:22:45.028 } 00:22:45.028 ] 00:22:45.028 }' 00:22:45.028 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.028 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.028 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:45.287 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=559 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:45.287 09:15:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.287 "name": "raid_bdev1", 00:22:45.287 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:45.287 "strip_size_kb": 64, 00:22:45.287 "state": "online", 00:22:45.287 "raid_level": "raid5f", 00:22:45.287 "superblock": true, 00:22:45.287 "num_base_bdevs": 3, 00:22:45.287 "num_base_bdevs_discovered": 3, 00:22:45.287 "num_base_bdevs_operational": 3, 00:22:45.287 "process": { 00:22:45.287 "type": "rebuild", 00:22:45.287 "target": "spare", 00:22:45.287 "progress": { 00:22:45.287 "blocks": 22528, 00:22:45.287 "percent": 17 00:22:45.287 } 00:22:45.287 }, 00:22:45.287 "base_bdevs_list": [ 00:22:45.287 { 00:22:45.287 "name": "spare", 00:22:45.287 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:45.287 "is_configured": true, 00:22:45.287 "data_offset": 2048, 00:22:45.287 
"data_size": 63488 00:22:45.287 }, 00:22:45.287 { 00:22:45.287 "name": "BaseBdev2", 00:22:45.287 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:45.287 "is_configured": true, 00:22:45.287 "data_offset": 2048, 00:22:45.287 "data_size": 63488 00:22:45.287 }, 00:22:45.287 { 00:22:45.287 "name": "BaseBdev3", 00:22:45.287 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:45.287 "is_configured": true, 00:22:45.287 "data_offset": 2048, 00:22:45.287 "data_size": 63488 00:22:45.287 } 00:22:45.287 ] 00:22:45.287 }' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.287 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.223 
09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.223 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.481 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.481 "name": "raid_bdev1", 00:22:46.481 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:46.481 "strip_size_kb": 64, 00:22:46.481 "state": "online", 00:22:46.481 "raid_level": "raid5f", 00:22:46.481 "superblock": true, 00:22:46.481 "num_base_bdevs": 3, 00:22:46.481 "num_base_bdevs_discovered": 3, 00:22:46.481 "num_base_bdevs_operational": 3, 00:22:46.481 "process": { 00:22:46.481 "type": "rebuild", 00:22:46.481 "target": "spare", 00:22:46.481 "progress": { 00:22:46.481 "blocks": 45056, 00:22:46.481 "percent": 35 00:22:46.481 } 00:22:46.481 }, 00:22:46.481 "base_bdevs_list": [ 00:22:46.482 { 00:22:46.482 "name": "spare", 00:22:46.482 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:46.482 "is_configured": true, 00:22:46.482 "data_offset": 2048, 00:22:46.482 "data_size": 63488 00:22:46.482 }, 00:22:46.482 { 00:22:46.482 "name": "BaseBdev2", 00:22:46.482 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:46.482 "is_configured": true, 00:22:46.482 "data_offset": 2048, 00:22:46.482 "data_size": 63488 00:22:46.482 }, 00:22:46.482 { 00:22:46.482 "name": "BaseBdev3", 00:22:46.482 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:46.482 "is_configured": true, 00:22:46.482 "data_offset": 2048, 00:22:46.482 "data_size": 63488 00:22:46.482 } 00:22:46.482 ] 00:22:46.482 }' 00:22:46.482 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.482 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.482 09:15:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.482 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.482 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:47.417 "name": "raid_bdev1", 00:22:47.417 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:47.417 "strip_size_kb": 64, 00:22:47.417 "state": "online", 00:22:47.417 "raid_level": "raid5f", 00:22:47.417 "superblock": true, 00:22:47.417 "num_base_bdevs": 3, 00:22:47.417 "num_base_bdevs_discovered": 3, 00:22:47.417 "num_base_bdevs_operational": 
3, 00:22:47.417 "process": { 00:22:47.417 "type": "rebuild", 00:22:47.417 "target": "spare", 00:22:47.417 "progress": { 00:22:47.417 "blocks": 67584, 00:22:47.417 "percent": 53 00:22:47.417 } 00:22:47.417 }, 00:22:47.417 "base_bdevs_list": [ 00:22:47.417 { 00:22:47.417 "name": "spare", 00:22:47.417 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:47.417 "is_configured": true, 00:22:47.417 "data_offset": 2048, 00:22:47.417 "data_size": 63488 00:22:47.417 }, 00:22:47.417 { 00:22:47.417 "name": "BaseBdev2", 00:22:47.417 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:47.417 "is_configured": true, 00:22:47.417 "data_offset": 2048, 00:22:47.417 "data_size": 63488 00:22:47.417 }, 00:22:47.417 { 00:22:47.417 "name": "BaseBdev3", 00:22:47.417 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:47.417 "is_configured": true, 00:22:47.417 "data_offset": 2048, 00:22:47.417 "data_size": 63488 00:22:47.417 } 00:22:47.417 ] 00:22:47.417 }' 00:22:47.417 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:47.675 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.675 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:47.675 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.675 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.611 
09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.611 "name": "raid_bdev1", 00:22:48.611 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:48.611 "strip_size_kb": 64, 00:22:48.611 "state": "online", 00:22:48.611 "raid_level": "raid5f", 00:22:48.611 "superblock": true, 00:22:48.611 "num_base_bdevs": 3, 00:22:48.611 "num_base_bdevs_discovered": 3, 00:22:48.611 "num_base_bdevs_operational": 3, 00:22:48.611 "process": { 00:22:48.611 "type": "rebuild", 00:22:48.611 "target": "spare", 00:22:48.611 "progress": { 00:22:48.611 "blocks": 92160, 00:22:48.611 "percent": 72 00:22:48.611 } 00:22:48.611 }, 00:22:48.611 "base_bdevs_list": [ 00:22:48.611 { 00:22:48.611 "name": "spare", 00:22:48.611 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:48.611 "is_configured": true, 00:22:48.611 "data_offset": 2048, 00:22:48.611 "data_size": 63488 00:22:48.611 }, 00:22:48.611 { 00:22:48.611 "name": "BaseBdev2", 00:22:48.611 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:48.611 "is_configured": true, 00:22:48.611 "data_offset": 2048, 00:22:48.611 "data_size": 63488 00:22:48.611 }, 00:22:48.611 { 00:22:48.611 "name": "BaseBdev3", 00:22:48.611 "uuid": 
"ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:48.611 "is_configured": true, 00:22:48.611 "data_offset": 2048, 00:22:48.611 "data_size": 63488 00:22:48.611 } 00:22:48.611 ] 00:22:48.611 }' 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.611 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.869 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.869 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.804 
09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.804 "name": "raid_bdev1", 00:22:49.804 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:49.804 "strip_size_kb": 64, 00:22:49.804 "state": "online", 00:22:49.804 "raid_level": "raid5f", 00:22:49.804 "superblock": true, 00:22:49.804 "num_base_bdevs": 3, 00:22:49.804 "num_base_bdevs_discovered": 3, 00:22:49.804 "num_base_bdevs_operational": 3, 00:22:49.804 "process": { 00:22:49.804 "type": "rebuild", 00:22:49.804 "target": "spare", 00:22:49.804 "progress": { 00:22:49.804 "blocks": 114688, 00:22:49.804 "percent": 90 00:22:49.804 } 00:22:49.804 }, 00:22:49.804 "base_bdevs_list": [ 00:22:49.804 { 00:22:49.804 "name": "spare", 00:22:49.804 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:49.804 "is_configured": true, 00:22:49.804 "data_offset": 2048, 00:22:49.804 "data_size": 63488 00:22:49.804 }, 00:22:49.804 { 00:22:49.804 "name": "BaseBdev2", 00:22:49.804 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:49.804 "is_configured": true, 00:22:49.804 "data_offset": 2048, 00:22:49.804 "data_size": 63488 00:22:49.804 }, 00:22:49.804 { 00:22:49.804 "name": "BaseBdev3", 00:22:49.804 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:49.804 "is_configured": true, 00:22:49.804 "data_offset": 2048, 00:22:49.804 "data_size": 63488 00:22:49.804 } 00:22:49.804 ] 00:22:49.804 }' 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.804 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:50.369 [2024-11-06 09:15:49.219644] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:50.369 [2024-11-06 09:15:49.219745] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:50.369 [2024-11-06 09:15:49.219892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.937 "name": "raid_bdev1", 00:22:50.937 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:50.937 "strip_size_kb": 64, 00:22:50.937 "state": "online", 00:22:50.937 "raid_level": "raid5f", 00:22:50.937 "superblock": true, 00:22:50.937 "num_base_bdevs": 3, 00:22:50.937 "num_base_bdevs_discovered": 3, 
00:22:50.937 "num_base_bdevs_operational": 3, 00:22:50.937 "base_bdevs_list": [ 00:22:50.937 { 00:22:50.937 "name": "spare", 00:22:50.937 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:50.937 "is_configured": true, 00:22:50.937 "data_offset": 2048, 00:22:50.937 "data_size": 63488 00:22:50.937 }, 00:22:50.937 { 00:22:50.937 "name": "BaseBdev2", 00:22:50.937 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:50.937 "is_configured": true, 00:22:50.937 "data_offset": 2048, 00:22:50.937 "data_size": 63488 00:22:50.937 }, 00:22:50.937 { 00:22:50.937 "name": "BaseBdev3", 00:22:50.937 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:50.937 "is_configured": true, 00:22:50.937 "data_offset": 2048, 00:22:50.937 "data_size": 63488 00:22:50.937 } 00:22:50.937 ] 00:22:50.937 }' 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.937 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.195 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.195 "name": "raid_bdev1", 00:22:51.195 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:51.195 "strip_size_kb": 64, 00:22:51.195 "state": "online", 00:22:51.195 "raid_level": "raid5f", 00:22:51.195 "superblock": true, 00:22:51.195 "num_base_bdevs": 3, 00:22:51.195 "num_base_bdevs_discovered": 3, 00:22:51.195 "num_base_bdevs_operational": 3, 00:22:51.195 "base_bdevs_list": [ 00:22:51.195 { 00:22:51.195 "name": "spare", 00:22:51.195 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:51.195 "is_configured": true, 00:22:51.195 "data_offset": 2048, 00:22:51.195 "data_size": 63488 00:22:51.195 }, 00:22:51.195 { 00:22:51.195 "name": "BaseBdev2", 00:22:51.195 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:51.195 "is_configured": true, 00:22:51.195 "data_offset": 2048, 00:22:51.195 "data_size": 63488 00:22:51.195 }, 00:22:51.195 { 00:22:51.195 "name": "BaseBdev3", 00:22:51.195 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:51.195 "is_configured": true, 00:22:51.195 "data_offset": 2048, 00:22:51.195 "data_size": 63488 00:22:51.195 } 00:22:51.195 ] 00:22:51.195 }' 00:22:51.195 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.195 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.196 "name": "raid_bdev1", 00:22:51.196 "uuid": 
"bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:51.196 "strip_size_kb": 64, 00:22:51.196 "state": "online", 00:22:51.196 "raid_level": "raid5f", 00:22:51.196 "superblock": true, 00:22:51.196 "num_base_bdevs": 3, 00:22:51.196 "num_base_bdevs_discovered": 3, 00:22:51.196 "num_base_bdevs_operational": 3, 00:22:51.196 "base_bdevs_list": [ 00:22:51.196 { 00:22:51.196 "name": "spare", 00:22:51.196 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:51.196 "is_configured": true, 00:22:51.196 "data_offset": 2048, 00:22:51.196 "data_size": 63488 00:22:51.196 }, 00:22:51.196 { 00:22:51.196 "name": "BaseBdev2", 00:22:51.196 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:51.196 "is_configured": true, 00:22:51.196 "data_offset": 2048, 00:22:51.196 "data_size": 63488 00:22:51.196 }, 00:22:51.196 { 00:22:51.196 "name": "BaseBdev3", 00:22:51.196 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:51.196 "is_configured": true, 00:22:51.196 "data_offset": 2048, 00:22:51.196 "data_size": 63488 00:22:51.196 } 00:22:51.196 ] 00:22:51.196 }' 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.196 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.764 [2024-11-06 09:15:50.564611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.764 [2024-11-06 09:15:50.564646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.764 [2024-11-06 09:15:50.564742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.764 [2024-11-06 09:15:50.564828] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.764 [2024-11-06 09:15:50.564848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:51.764 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:52.025 /dev/nbd0 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.025 1+0 records in 00:22:52.025 1+0 records out 00:22:52.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067986 s, 6.0 MB/s 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.025 09:15:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:52.025 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:52.289 /dev/nbd1 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.289 1+0 records in 00:22:52.289 1+0 records out 00:22:52.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479644 s, 8.5 MB/s 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:52.289 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.554 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.821 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 [2024-11-06 09:15:51.957713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.090 [2024-11-06 09:15:51.957806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.090 [2024-11-06 09:15:51.957837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:53.090 [2024-11-06 09:15:51.957857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.090 [2024-11-06 09:15:51.961617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.090 [2024-11-06 09:15:51.961683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.090 [2024-11-06 09:15:51.961849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:53.090 [2024-11-06 09:15:51.961964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.090 spare 00:22:53.090 [2024-11-06 09:15:51.962553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.090 09:15:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.090 [2024-11-06 09:15:51.962940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.090 09:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 [2024-11-06 09:15:52.063019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:53.090 [2024-11-06 09:15:52.063097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:53.090 [2024-11-06 09:15:52.063551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:22:53.090 [2024-11-06 09:15:52.070344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:53.090 [2024-11-06 09:15:52.070382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:53.090 [2024-11-06 09:15:52.070663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.090 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.361 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.361 "name": "raid_bdev1", 00:22:53.361 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:53.361 "strip_size_kb": 64, 00:22:53.361 "state": "online", 00:22:53.361 "raid_level": "raid5f", 00:22:53.361 "superblock": true, 00:22:53.361 "num_base_bdevs": 3, 00:22:53.361 "num_base_bdevs_discovered": 3, 00:22:53.361 "num_base_bdevs_operational": 3, 00:22:53.361 "base_bdevs_list": [ 00:22:53.361 { 00:22:53.361 "name": "spare", 00:22:53.361 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:53.361 "is_configured": true, 00:22:53.361 "data_offset": 2048, 00:22:53.361 "data_size": 63488 00:22:53.361 }, 00:22:53.361 { 00:22:53.361 "name": "BaseBdev2", 00:22:53.361 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:53.361 "is_configured": true, 00:22:53.361 "data_offset": 
2048, 00:22:53.361 "data_size": 63488 00:22:53.361 }, 00:22:53.361 { 00:22:53.361 "name": "BaseBdev3", 00:22:53.361 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:53.361 "is_configured": true, 00:22:53.361 "data_offset": 2048, 00:22:53.361 "data_size": 63488 00:22:53.361 } 00:22:53.361 ] 00:22:53.361 }' 00:22:53.361 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.361 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.633 "name": "raid_bdev1", 00:22:53.633 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:53.633 "strip_size_kb": 64, 00:22:53.633 "state": "online", 00:22:53.633 "raid_level": "raid5f", 00:22:53.633 "superblock": true, 00:22:53.633 
"num_base_bdevs": 3, 00:22:53.633 "num_base_bdevs_discovered": 3, 00:22:53.633 "num_base_bdevs_operational": 3, 00:22:53.633 "base_bdevs_list": [ 00:22:53.633 { 00:22:53.633 "name": "spare", 00:22:53.633 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:53.633 "is_configured": true, 00:22:53.633 "data_offset": 2048, 00:22:53.633 "data_size": 63488 00:22:53.633 }, 00:22:53.633 { 00:22:53.633 "name": "BaseBdev2", 00:22:53.633 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:53.633 "is_configured": true, 00:22:53.633 "data_offset": 2048, 00:22:53.633 "data_size": 63488 00:22:53.633 }, 00:22:53.633 { 00:22:53.633 "name": "BaseBdev3", 00:22:53.633 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:53.633 "is_configured": true, 00:22:53.633 "data_offset": 2048, 00:22:53.633 "data_size": 63488 00:22:53.633 } 00:22:53.633 ] 00:22:53.633 }' 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:53.633 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.908 09:15:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.908 [2024-11-06 09:15:52.758352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.908 09:15:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.908 "name": "raid_bdev1", 00:22:53.908 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:53.908 "strip_size_kb": 64, 00:22:53.908 "state": "online", 00:22:53.908 "raid_level": "raid5f", 00:22:53.908 "superblock": true, 00:22:53.908 "num_base_bdevs": 3, 00:22:53.908 "num_base_bdevs_discovered": 2, 00:22:53.908 "num_base_bdevs_operational": 2, 00:22:53.908 "base_bdevs_list": [ 00:22:53.908 { 00:22:53.908 "name": null, 00:22:53.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.908 "is_configured": false, 00:22:53.908 "data_offset": 0, 00:22:53.908 "data_size": 63488 00:22:53.908 }, 00:22:53.908 { 00:22:53.908 "name": "BaseBdev2", 00:22:53.908 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:53.908 "is_configured": true, 00:22:53.908 "data_offset": 2048, 00:22:53.908 "data_size": 63488 00:22:53.908 }, 00:22:53.908 { 00:22:53.908 "name": "BaseBdev3", 00:22:53.908 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:53.908 "is_configured": true, 00:22:53.908 "data_offset": 2048, 00:22:53.908 "data_size": 63488 00:22:53.908 } 00:22:53.908 ] 00:22:53.908 }' 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.908 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.479 09:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:54.479 09:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.479 09:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.479 [2024-11-06 09:15:53.234394] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.479 [2024-11-06 09:15:53.234619] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:54.479 [2024-11-06 09:15:53.234655] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:54.479 [2024-11-06 09:15:53.234699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.479 [2024-11-06 09:15:53.253249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:22:54.479 09:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.479 09:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:54.479 [2024-11-06 09:15:53.262452] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.415 "name": "raid_bdev1", 00:22:55.415 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:55.415 "strip_size_kb": 64, 00:22:55.415 "state": "online", 00:22:55.415 "raid_level": "raid5f", 00:22:55.415 "superblock": true, 00:22:55.415 "num_base_bdevs": 3, 00:22:55.415 "num_base_bdevs_discovered": 3, 00:22:55.415 "num_base_bdevs_operational": 3, 00:22:55.415 "process": { 00:22:55.415 "type": "rebuild", 00:22:55.415 "target": "spare", 00:22:55.415 "progress": { 00:22:55.415 "blocks": 18432, 00:22:55.415 "percent": 14 00:22:55.415 } 00:22:55.415 }, 00:22:55.415 "base_bdevs_list": [ 00:22:55.415 { 00:22:55.415 "name": "spare", 00:22:55.415 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:55.415 "is_configured": true, 00:22:55.415 "data_offset": 2048, 00:22:55.415 "data_size": 63488 00:22:55.415 }, 00:22:55.415 { 00:22:55.415 "name": "BaseBdev2", 00:22:55.415 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:55.415 "is_configured": true, 00:22:55.415 "data_offset": 2048, 00:22:55.415 "data_size": 63488 00:22:55.415 }, 00:22:55.415 { 00:22:55.415 "name": "BaseBdev3", 00:22:55.415 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:55.415 "is_configured": true, 00:22:55.415 "data_offset": 2048, 00:22:55.415 "data_size": 63488 00:22:55.415 } 00:22:55.415 ] 00:22:55.415 }' 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.415 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.415 [2024-11-06 09:15:54.402536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.674 [2024-11-06 09:15:54.473733] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:55.674 [2024-11-06 09:15:54.473839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.674 [2024-11-06 09:15:54.473861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.674 [2024-11-06 09:15:54.473877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.674 09:15:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.674 "name": "raid_bdev1", 00:22:55.674 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:55.674 "strip_size_kb": 64, 00:22:55.674 "state": "online", 00:22:55.674 "raid_level": "raid5f", 00:22:55.674 "superblock": true, 00:22:55.674 "num_base_bdevs": 3, 00:22:55.674 "num_base_bdevs_discovered": 2, 00:22:55.674 "num_base_bdevs_operational": 2, 00:22:55.674 "base_bdevs_list": [ 00:22:55.674 { 00:22:55.674 "name": null, 00:22:55.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.674 "is_configured": false, 00:22:55.674 "data_offset": 0, 00:22:55.674 "data_size": 63488 00:22:55.674 }, 00:22:55.674 { 00:22:55.674 "name": "BaseBdev2", 00:22:55.674 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:55.674 "is_configured": true, 00:22:55.674 "data_offset": 2048, 00:22:55.674 "data_size": 63488 00:22:55.674 }, 00:22:55.674 { 00:22:55.674 "name": "BaseBdev3", 00:22:55.674 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:55.674 "is_configured": true, 00:22:55.674 "data_offset": 2048, 00:22:55.674 "data_size": 63488 00:22:55.674 } 00:22:55.674 ] 00:22:55.674 }' 00:22:55.674 09:15:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.674 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.241 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:56.241 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.241 09:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.241 [2024-11-06 09:15:54.994665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.241 [2024-11-06 09:15:54.994745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.241 [2024-11-06 09:15:54.994773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:56.241 [2024-11-06 09:15:54.994794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.241 [2024-11-06 09:15:54.995357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.242 [2024-11-06 09:15:54.995403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.242 [2024-11-06 09:15:54.995522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:56.242 [2024-11-06 09:15:54.995542] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:56.242 [2024-11-06 09:15:54.995556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:56.242 [2024-11-06 09:15:54.995586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.242 [2024-11-06 09:15:55.014221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:22:56.242 spare 00:22:56.242 09:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.242 09:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:56.242 [2024-11-06 09:15:55.023233] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.177 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.177 "name": "raid_bdev1", 00:22:57.177 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:57.177 "strip_size_kb": 64, 00:22:57.177 "state": 
"online", 00:22:57.177 "raid_level": "raid5f", 00:22:57.177 "superblock": true, 00:22:57.177 "num_base_bdevs": 3, 00:22:57.177 "num_base_bdevs_discovered": 3, 00:22:57.177 "num_base_bdevs_operational": 3, 00:22:57.177 "process": { 00:22:57.177 "type": "rebuild", 00:22:57.177 "target": "spare", 00:22:57.177 "progress": { 00:22:57.177 "blocks": 20480, 00:22:57.177 "percent": 16 00:22:57.177 } 00:22:57.177 }, 00:22:57.177 "base_bdevs_list": [ 00:22:57.177 { 00:22:57.177 "name": "spare", 00:22:57.177 "uuid": "f83e3c87-fe78-5bbd-b5e2-de4cefef1359", 00:22:57.177 "is_configured": true, 00:22:57.177 "data_offset": 2048, 00:22:57.177 "data_size": 63488 00:22:57.177 }, 00:22:57.177 { 00:22:57.178 "name": "BaseBdev2", 00:22:57.178 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:57.178 "is_configured": true, 00:22:57.178 "data_offset": 2048, 00:22:57.178 "data_size": 63488 00:22:57.178 }, 00:22:57.178 { 00:22:57.178 "name": "BaseBdev3", 00:22:57.178 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:57.178 "is_configured": true, 00:22:57.178 "data_offset": 2048, 00:22:57.178 "data_size": 63488 00:22:57.178 } 00:22:57.178 ] 00:22:57.178 }' 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.178 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.178 [2024-11-06 09:15:56.178521] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.437 [2024-11-06 09:15:56.234457] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:57.437 [2024-11-06 09:15:56.234536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.437 [2024-11-06 09:15:56.234561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.437 [2024-11-06 09:15:56.234572] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.437 "name": "raid_bdev1", 00:22:57.437 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:57.437 "strip_size_kb": 64, 00:22:57.437 "state": "online", 00:22:57.437 "raid_level": "raid5f", 00:22:57.437 "superblock": true, 00:22:57.437 "num_base_bdevs": 3, 00:22:57.437 "num_base_bdevs_discovered": 2, 00:22:57.437 "num_base_bdevs_operational": 2, 00:22:57.437 "base_bdevs_list": [ 00:22:57.437 { 00:22:57.437 "name": null, 00:22:57.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.437 "is_configured": false, 00:22:57.437 "data_offset": 0, 00:22:57.437 "data_size": 63488 00:22:57.437 }, 00:22:57.437 { 00:22:57.437 "name": "BaseBdev2", 00:22:57.437 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:57.437 "is_configured": true, 00:22:57.437 "data_offset": 2048, 00:22:57.437 "data_size": 63488 00:22:57.437 }, 00:22:57.437 { 00:22:57.437 "name": "BaseBdev3", 00:22:57.437 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:57.437 "is_configured": true, 00:22:57.437 "data_offset": 2048, 00:22:57.437 "data_size": 63488 00:22:57.437 } 00:22:57.437 ] 00:22:57.437 }' 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.437 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.696 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.696 "name": "raid_bdev1", 00:22:57.696 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:57.696 "strip_size_kb": 64, 00:22:57.696 "state": "online", 00:22:57.696 "raid_level": "raid5f", 00:22:57.696 "superblock": true, 00:22:57.696 "num_base_bdevs": 3, 00:22:57.697 "num_base_bdevs_discovered": 2, 00:22:57.697 "num_base_bdevs_operational": 2, 00:22:57.697 "base_bdevs_list": [ 00:22:57.697 { 00:22:57.697 "name": null, 00:22:57.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.697 "is_configured": false, 00:22:57.697 "data_offset": 0, 00:22:57.697 "data_size": 63488 00:22:57.697 }, 00:22:57.697 { 00:22:57.697 "name": "BaseBdev2", 00:22:57.697 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:57.697 "is_configured": true, 00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 }, 00:22:57.697 { 00:22:57.697 "name": "BaseBdev3", 00:22:57.697 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:57.697 "is_configured": true, 
00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 } 00:22:57.697 ] 00:22:57.697 }' 00:22:57.697 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.697 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.956 [2024-11-06 09:15:56.796594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:57.956 [2024-11-06 09:15:56.796663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.956 [2024-11-06 09:15:56.796694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:57.956 [2024-11-06 09:15:56.796709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.956 [2024-11-06 09:15:56.797244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.956 [2024-11-06 
09:15:56.797298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:57.956 [2024-11-06 09:15:56.797392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:57.956 [2024-11-06 09:15:56.797432] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:57.956 [2024-11-06 09:15:56.797458] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:57.956 [2024-11-06 09:15:56.797478] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:57.956 BaseBdev1 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.956 09:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.893 09:15:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.893 "name": "raid_bdev1", 00:22:58.893 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:58.893 "strip_size_kb": 64, 00:22:58.893 "state": "online", 00:22:58.893 "raid_level": "raid5f", 00:22:58.893 "superblock": true, 00:22:58.893 "num_base_bdevs": 3, 00:22:58.893 "num_base_bdevs_discovered": 2, 00:22:58.893 "num_base_bdevs_operational": 2, 00:22:58.893 "base_bdevs_list": [ 00:22:58.893 { 00:22:58.893 "name": null, 00:22:58.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.893 "is_configured": false, 00:22:58.893 "data_offset": 0, 00:22:58.893 "data_size": 63488 00:22:58.893 }, 00:22:58.893 { 00:22:58.893 "name": "BaseBdev2", 00:22:58.893 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:22:58.893 "is_configured": true, 00:22:58.893 "data_offset": 2048, 00:22:58.893 "data_size": 63488 00:22:58.893 }, 00:22:58.893 { 00:22:58.893 "name": "BaseBdev3", 00:22:58.893 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:58.893 "is_configured": true, 00:22:58.893 "data_offset": 2048, 00:22:58.893 "data_size": 63488 00:22:58.893 } 00:22:58.893 ] 00:22:58.893 }' 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.893 09:15:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.460 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.460 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.460 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.461 "name": "raid_bdev1", 00:22:59.461 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:22:59.461 "strip_size_kb": 64, 00:22:59.461 "state": "online", 00:22:59.461 "raid_level": "raid5f", 00:22:59.461 "superblock": true, 00:22:59.461 "num_base_bdevs": 3, 00:22:59.461 "num_base_bdevs_discovered": 2, 00:22:59.461 "num_base_bdevs_operational": 2, 00:22:59.461 "base_bdevs_list": [ 00:22:59.461 { 00:22:59.461 "name": null, 00:22:59.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.461 "is_configured": false, 00:22:59.461 "data_offset": 0, 00:22:59.461 "data_size": 63488 00:22:59.461 }, 00:22:59.461 { 00:22:59.461 "name": "BaseBdev2", 00:22:59.461 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 
00:22:59.461 "is_configured": true, 00:22:59.461 "data_offset": 2048, 00:22:59.461 "data_size": 63488 00:22:59.461 }, 00:22:59.461 { 00:22:59.461 "name": "BaseBdev3", 00:22:59.461 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:22:59.461 "is_configured": true, 00:22:59.461 "data_offset": 2048, 00:22:59.461 "data_size": 63488 00:22:59.461 } 00:22:59.461 ] 00:22:59.461 }' 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.461 09:15:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.461 [2024-11-06 09:15:58.426783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.461 [2024-11-06 09:15:58.426985] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:59.461 [2024-11-06 09:15:58.427008] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:59.461 request: 00:22:59.461 { 00:22:59.461 "base_bdev": "BaseBdev1", 00:22:59.461 "raid_bdev": "raid_bdev1", 00:22:59.461 "method": "bdev_raid_add_base_bdev", 00:22:59.461 "req_id": 1 00:22:59.461 } 00:22:59.461 Got JSON-RPC error response 00:22:59.461 response: 00:22:59.461 { 00:22:59.461 "code": -22, 00:22:59.461 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:59.461 } 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.461 09:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.844 "name": "raid_bdev1", 00:23:00.844 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:23:00.844 "strip_size_kb": 64, 00:23:00.844 "state": "online", 00:23:00.844 "raid_level": "raid5f", 00:23:00.844 "superblock": true, 00:23:00.844 "num_base_bdevs": 3, 00:23:00.844 "num_base_bdevs_discovered": 2, 00:23:00.844 "num_base_bdevs_operational": 2, 00:23:00.844 "base_bdevs_list": [ 00:23:00.844 { 00:23:00.844 "name": null, 00:23:00.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.844 "is_configured": false, 00:23:00.844 "data_offset": 0, 00:23:00.844 "data_size": 63488 00:23:00.844 }, 00:23:00.844 { 00:23:00.844 
"name": "BaseBdev2", 00:23:00.844 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:23:00.844 "is_configured": true, 00:23:00.844 "data_offset": 2048, 00:23:00.844 "data_size": 63488 00:23:00.844 }, 00:23:00.844 { 00:23:00.844 "name": "BaseBdev3", 00:23:00.844 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:23:00.844 "is_configured": true, 00:23:00.844 "data_offset": 2048, 00:23:00.844 "data_size": 63488 00:23:00.844 } 00:23:00.844 ] 00:23:00.844 }' 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.844 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.106 "name": "raid_bdev1", 00:23:01.106 "uuid": "bab9f0ec-581c-4830-8e1a-df4b57fb84b0", 00:23:01.106 
"strip_size_kb": 64, 00:23:01.106 "state": "online", 00:23:01.106 "raid_level": "raid5f", 00:23:01.106 "superblock": true, 00:23:01.106 "num_base_bdevs": 3, 00:23:01.106 "num_base_bdevs_discovered": 2, 00:23:01.106 "num_base_bdevs_operational": 2, 00:23:01.106 "base_bdevs_list": [ 00:23:01.106 { 00:23:01.106 "name": null, 00:23:01.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.106 "is_configured": false, 00:23:01.106 "data_offset": 0, 00:23:01.106 "data_size": 63488 00:23:01.106 }, 00:23:01.106 { 00:23:01.106 "name": "BaseBdev2", 00:23:01.106 "uuid": "2ba183bb-b9a9-5c06-ae6d-809e7bc65a8d", 00:23:01.106 "is_configured": true, 00:23:01.106 "data_offset": 2048, 00:23:01.106 "data_size": 63488 00:23:01.106 }, 00:23:01.106 { 00:23:01.106 "name": "BaseBdev3", 00:23:01.106 "uuid": "ca0b1443-0f57-53b2-bb6d-5915995e3400", 00:23:01.106 "is_configured": true, 00:23:01.106 "data_offset": 2048, 00:23:01.106 "data_size": 63488 00:23:01.106 } 00:23:01.106 ] 00:23:01.106 }' 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.106 09:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81719 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81719 ']' 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 81719 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:01.106 09:16:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81719 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81719' 00:23:01.106 killing process with pid 81719 00:23:01.106 Received shutdown signal, test time was about 60.000000 seconds 00:23:01.106 00:23:01.106 Latency(us) 00:23:01.106 [2024-11-06T09:16:00.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.106 [2024-11-06T09:16:00.146Z] =================================================================================================================== 00:23:01.106 [2024-11-06T09:16:00.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 81719 00:23:01.106 [2024-11-06 09:16:00.078061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.106 09:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 81719 00:23:01.106 [2024-11-06 09:16:00.078238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.106 [2024-11-06 09:16:00.078333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.106 [2024-11-06 09:16:00.078354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:01.673 [2024-11-06 09:16:00.511888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:03.051 09:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:03.051 00:23:03.051 real 0m23.713s 00:23:03.051 user 0m30.205s 
00:23:03.051 sys 0m3.290s 00:23:03.051 09:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:03.051 09:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.051 ************************************ 00:23:03.051 END TEST raid5f_rebuild_test_sb 00:23:03.051 ************************************ 00:23:03.051 09:16:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:23:03.051 09:16:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:03.051 09:16:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:03.051 09:16:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:03.051 09:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:03.051 ************************************ 00:23:03.051 START TEST raid5f_state_function_test 00:23:03.051 ************************************ 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:23:03.051 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82472 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82472' 00:23:03.052 Process raid pid: 82472 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82472 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 82472 ']' 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:03.052 09:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.052 [2024-11-06 09:16:01.885701] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:23:03.052 [2024-11-06 09:16:01.885837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.052 [2024-11-06 09:16:02.070505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.310 [2024-11-06 09:16:02.221734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.569 [2024-11-06 09:16:02.450095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.569 [2024-11-06 09:16:02.450147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.139 [2024-11-06 09:16:02.872896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:04.139 [2024-11-06 09:16:02.872960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:04.139 [2024-11-06 09:16:02.872973] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.139 [2024-11-06 09:16:02.872987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.139 [2024-11-06 09:16:02.872996] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:04.139 [2024-11-06 09:16:02.873008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.139 [2024-11-06 09:16:02.873017] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:04.139 [2024-11-06 09:16:02.873029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.139 "name": "Existed_Raid", 00:23:04.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.139 "strip_size_kb": 64, 00:23:04.139 "state": "configuring", 00:23:04.139 "raid_level": "raid5f", 00:23:04.139 "superblock": false, 00:23:04.139 "num_base_bdevs": 4, 00:23:04.139 "num_base_bdevs_discovered": 0, 00:23:04.139 "num_base_bdevs_operational": 4, 00:23:04.139 "base_bdevs_list": [ 00:23:04.139 { 00:23:04.139 "name": "BaseBdev1", 00:23:04.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.139 "is_configured": false, 00:23:04.139 "data_offset": 0, 00:23:04.139 "data_size": 0 00:23:04.139 }, 00:23:04.139 { 00:23:04.139 "name": "BaseBdev2", 00:23:04.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.139 "is_configured": false, 00:23:04.139 "data_offset": 0, 00:23:04.139 "data_size": 0 00:23:04.139 }, 00:23:04.139 { 00:23:04.139 "name": "BaseBdev3", 00:23:04.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.139 "is_configured": false, 00:23:04.139 "data_offset": 0, 00:23:04.139 "data_size": 0 00:23:04.139 }, 00:23:04.139 { 00:23:04.139 "name": "BaseBdev4", 00:23:04.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.139 "is_configured": false, 00:23:04.139 "data_offset": 0, 00:23:04.139 "data_size": 0 00:23:04.139 } 00:23:04.139 ] 00:23:04.139 }' 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.139 09:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.398 09:16:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:04.398 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.398 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.398 [2024-11-06 09:16:03.300256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.398 [2024-11-06 09:16:03.300310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:04.398 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.399 [2024-11-06 09:16:03.312240] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:04.399 [2024-11-06 09:16:03.312302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:04.399 [2024-11-06 09:16:03.312313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.399 [2024-11-06 09:16:03.312327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.399 [2024-11-06 09:16:03.312335] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:04.399 [2024-11-06 09:16:03.312347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.399 [2024-11-06 09:16:03.312355] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:04.399 [2024-11-06 09:16:03.312367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.399 [2024-11-06 09:16:03.365086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.399 BaseBdev1 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.399 
09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.399 [ 00:23:04.399 { 00:23:04.399 "name": "BaseBdev1", 00:23:04.399 "aliases": [ 00:23:04.399 "65979bcc-9991-454e-b1dc-41bc3c90742c" 00:23:04.399 ], 00:23:04.399 "product_name": "Malloc disk", 00:23:04.399 "block_size": 512, 00:23:04.399 "num_blocks": 65536, 00:23:04.399 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:04.399 "assigned_rate_limits": { 00:23:04.399 "rw_ios_per_sec": 0, 00:23:04.399 "rw_mbytes_per_sec": 0, 00:23:04.399 "r_mbytes_per_sec": 0, 00:23:04.399 "w_mbytes_per_sec": 0 00:23:04.399 }, 00:23:04.399 "claimed": true, 00:23:04.399 "claim_type": "exclusive_write", 00:23:04.399 "zoned": false, 00:23:04.399 "supported_io_types": { 00:23:04.399 "read": true, 00:23:04.399 "write": true, 00:23:04.399 "unmap": true, 00:23:04.399 "flush": true, 00:23:04.399 "reset": true, 00:23:04.399 "nvme_admin": false, 00:23:04.399 "nvme_io": false, 00:23:04.399 "nvme_io_md": false, 00:23:04.399 "write_zeroes": true, 00:23:04.399 "zcopy": true, 00:23:04.399 "get_zone_info": false, 00:23:04.399 "zone_management": false, 00:23:04.399 "zone_append": false, 00:23:04.399 "compare": false, 00:23:04.399 "compare_and_write": false, 00:23:04.399 "abort": true, 00:23:04.399 "seek_hole": false, 00:23:04.399 "seek_data": false, 00:23:04.399 "copy": true, 00:23:04.399 "nvme_iov_md": false 00:23:04.399 }, 00:23:04.399 "memory_domains": [ 00:23:04.399 { 00:23:04.399 "dma_device_id": "system", 00:23:04.399 "dma_device_type": 1 00:23:04.399 }, 00:23:04.399 { 00:23:04.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.399 "dma_device_type": 2 00:23:04.399 } 00:23:04.399 ], 00:23:04.399 "driver_specific": {} 00:23:04.399 } 
00:23:04.399 ] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.399 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.658 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:04.658 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.658 "name": "Existed_Raid", 00:23:04.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.658 "strip_size_kb": 64, 00:23:04.658 "state": "configuring", 00:23:04.658 "raid_level": "raid5f", 00:23:04.658 "superblock": false, 00:23:04.658 "num_base_bdevs": 4, 00:23:04.658 "num_base_bdevs_discovered": 1, 00:23:04.658 "num_base_bdevs_operational": 4, 00:23:04.658 "base_bdevs_list": [ 00:23:04.658 { 00:23:04.658 "name": "BaseBdev1", 00:23:04.658 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:04.658 "is_configured": true, 00:23:04.658 "data_offset": 0, 00:23:04.658 "data_size": 65536 00:23:04.658 }, 00:23:04.658 { 00:23:04.658 "name": "BaseBdev2", 00:23:04.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.658 "is_configured": false, 00:23:04.658 "data_offset": 0, 00:23:04.658 "data_size": 0 00:23:04.658 }, 00:23:04.658 { 00:23:04.658 "name": "BaseBdev3", 00:23:04.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.658 "is_configured": false, 00:23:04.658 "data_offset": 0, 00:23:04.658 "data_size": 0 00:23:04.658 }, 00:23:04.658 { 00:23:04.658 "name": "BaseBdev4", 00:23:04.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.658 "is_configured": false, 00:23:04.658 "data_offset": 0, 00:23:04.658 "data_size": 0 00:23:04.658 } 00:23:04.658 ] 00:23:04.658 }' 00:23:04.658 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.658 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.917 
[2024-11-06 09:16:03.864470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.917 [2024-11-06 09:16:03.864536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.917 [2024-11-06 09:16:03.876492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.917 [2024-11-06 09:16:03.878656] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.917 [2024-11-06 09:16:03.878706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.917 [2024-11-06 09:16:03.878718] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:04.917 [2024-11-06 09:16:03.878733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.917 [2024-11-06 09:16:03.878742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:04.917 [2024-11-06 09:16:03.878754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.917 "name": "Existed_Raid", 00:23:04.917 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:04.917 "strip_size_kb": 64, 00:23:04.917 "state": "configuring", 00:23:04.917 "raid_level": "raid5f", 00:23:04.917 "superblock": false, 00:23:04.917 "num_base_bdevs": 4, 00:23:04.917 "num_base_bdevs_discovered": 1, 00:23:04.917 "num_base_bdevs_operational": 4, 00:23:04.917 "base_bdevs_list": [ 00:23:04.917 { 00:23:04.917 "name": "BaseBdev1", 00:23:04.917 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:04.917 "is_configured": true, 00:23:04.917 "data_offset": 0, 00:23:04.917 "data_size": 65536 00:23:04.917 }, 00:23:04.917 { 00:23:04.917 "name": "BaseBdev2", 00:23:04.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.917 "is_configured": false, 00:23:04.917 "data_offset": 0, 00:23:04.917 "data_size": 0 00:23:04.917 }, 00:23:04.917 { 00:23:04.917 "name": "BaseBdev3", 00:23:04.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.917 "is_configured": false, 00:23:04.917 "data_offset": 0, 00:23:04.917 "data_size": 0 00:23:04.917 }, 00:23:04.917 { 00:23:04.917 "name": "BaseBdev4", 00:23:04.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.917 "is_configured": false, 00:23:04.917 "data_offset": 0, 00:23:04.917 "data_size": 0 00:23:04.917 } 00:23:04.917 ] 00:23:04.917 }' 00:23:04.917 09:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.918 09:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 [2024-11-06 09:16:04.346119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.485 BaseBdev2 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 [ 00:23:05.485 { 00:23:05.485 "name": "BaseBdev2", 00:23:05.485 "aliases": [ 00:23:05.485 "06f595ef-9270-417d-910f-9bdcc41c2813" 00:23:05.485 ], 00:23:05.485 "product_name": "Malloc disk", 00:23:05.485 "block_size": 512, 00:23:05.485 "num_blocks": 65536, 00:23:05.485 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:05.485 "assigned_rate_limits": { 00:23:05.485 "rw_ios_per_sec": 0, 00:23:05.485 "rw_mbytes_per_sec": 0, 00:23:05.485 
"r_mbytes_per_sec": 0, 00:23:05.485 "w_mbytes_per_sec": 0 00:23:05.485 }, 00:23:05.485 "claimed": true, 00:23:05.485 "claim_type": "exclusive_write", 00:23:05.485 "zoned": false, 00:23:05.485 "supported_io_types": { 00:23:05.485 "read": true, 00:23:05.485 "write": true, 00:23:05.485 "unmap": true, 00:23:05.485 "flush": true, 00:23:05.485 "reset": true, 00:23:05.485 "nvme_admin": false, 00:23:05.485 "nvme_io": false, 00:23:05.485 "nvme_io_md": false, 00:23:05.485 "write_zeroes": true, 00:23:05.485 "zcopy": true, 00:23:05.485 "get_zone_info": false, 00:23:05.485 "zone_management": false, 00:23:05.485 "zone_append": false, 00:23:05.485 "compare": false, 00:23:05.485 "compare_and_write": false, 00:23:05.485 "abort": true, 00:23:05.485 "seek_hole": false, 00:23:05.485 "seek_data": false, 00:23:05.485 "copy": true, 00:23:05.485 "nvme_iov_md": false 00:23:05.485 }, 00:23:05.485 "memory_domains": [ 00:23:05.485 { 00:23:05.485 "dma_device_id": "system", 00:23:05.485 "dma_device_type": 1 00:23:05.485 }, 00:23:05.485 { 00:23:05.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.485 "dma_device_type": 2 00:23:05.485 } 00:23:05.485 ], 00:23:05.485 "driver_specific": {} 00:23:05.485 } 00:23:05.485 ] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.485 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.485 "name": "Existed_Raid", 00:23:05.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.485 "strip_size_kb": 64, 00:23:05.485 "state": "configuring", 00:23:05.485 "raid_level": "raid5f", 00:23:05.485 "superblock": false, 00:23:05.485 "num_base_bdevs": 4, 00:23:05.485 "num_base_bdevs_discovered": 2, 00:23:05.485 "num_base_bdevs_operational": 4, 00:23:05.485 "base_bdevs_list": [ 00:23:05.485 { 00:23:05.485 "name": "BaseBdev1", 00:23:05.485 "uuid": 
"65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:05.485 "is_configured": true, 00:23:05.485 "data_offset": 0, 00:23:05.485 "data_size": 65536 00:23:05.485 }, 00:23:05.485 { 00:23:05.485 "name": "BaseBdev2", 00:23:05.485 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:05.485 "is_configured": true, 00:23:05.485 "data_offset": 0, 00:23:05.485 "data_size": 65536 00:23:05.485 }, 00:23:05.485 { 00:23:05.486 "name": "BaseBdev3", 00:23:05.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.486 "is_configured": false, 00:23:05.486 "data_offset": 0, 00:23:05.486 "data_size": 0 00:23:05.486 }, 00:23:05.486 { 00:23:05.486 "name": "BaseBdev4", 00:23:05.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.486 "is_configured": false, 00:23:05.486 "data_offset": 0, 00:23:05.486 "data_size": 0 00:23:05.486 } 00:23:05.486 ] 00:23:05.486 }' 00:23:05.486 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.486 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.052 [2024-11-06 09:16:04.916752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.052 BaseBdev3 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:06.052 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.053 [ 00:23:06.053 { 00:23:06.053 "name": "BaseBdev3", 00:23:06.053 "aliases": [ 00:23:06.053 "9143e426-1d0d-4f7a-b62e-6fb784a93a97" 00:23:06.053 ], 00:23:06.053 "product_name": "Malloc disk", 00:23:06.053 "block_size": 512, 00:23:06.053 "num_blocks": 65536, 00:23:06.053 "uuid": "9143e426-1d0d-4f7a-b62e-6fb784a93a97", 00:23:06.053 "assigned_rate_limits": { 00:23:06.053 "rw_ios_per_sec": 0, 00:23:06.053 "rw_mbytes_per_sec": 0, 00:23:06.053 "r_mbytes_per_sec": 0, 00:23:06.053 "w_mbytes_per_sec": 0 00:23:06.053 }, 00:23:06.053 "claimed": true, 00:23:06.053 "claim_type": "exclusive_write", 00:23:06.053 "zoned": false, 00:23:06.053 "supported_io_types": { 00:23:06.053 "read": true, 00:23:06.053 "write": true, 00:23:06.053 "unmap": true, 00:23:06.053 "flush": true, 00:23:06.053 "reset": true, 00:23:06.053 "nvme_admin": false, 
00:23:06.053 "nvme_io": false, 00:23:06.053 "nvme_io_md": false, 00:23:06.053 "write_zeroes": true, 00:23:06.053 "zcopy": true, 00:23:06.053 "get_zone_info": false, 00:23:06.053 "zone_management": false, 00:23:06.053 "zone_append": false, 00:23:06.053 "compare": false, 00:23:06.053 "compare_and_write": false, 00:23:06.053 "abort": true, 00:23:06.053 "seek_hole": false, 00:23:06.053 "seek_data": false, 00:23:06.053 "copy": true, 00:23:06.053 "nvme_iov_md": false 00:23:06.053 }, 00:23:06.053 "memory_domains": [ 00:23:06.053 { 00:23:06.053 "dma_device_id": "system", 00:23:06.053 "dma_device_type": 1 00:23:06.053 }, 00:23:06.053 { 00:23:06.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.053 "dma_device_type": 2 00:23:06.053 } 00:23:06.053 ], 00:23:06.053 "driver_specific": {} 00:23:06.053 } 00:23:06.053 ] 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.053 09:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.053 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.053 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.053 "name": "Existed_Raid", 00:23:06.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.053 "strip_size_kb": 64, 00:23:06.053 "state": "configuring", 00:23:06.053 "raid_level": "raid5f", 00:23:06.053 "superblock": false, 00:23:06.053 "num_base_bdevs": 4, 00:23:06.053 "num_base_bdevs_discovered": 3, 00:23:06.053 "num_base_bdevs_operational": 4, 00:23:06.053 "base_bdevs_list": [ 00:23:06.053 { 00:23:06.053 "name": "BaseBdev1", 00:23:06.053 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:06.053 "is_configured": true, 00:23:06.053 "data_offset": 0, 00:23:06.053 "data_size": 65536 00:23:06.053 }, 00:23:06.053 { 00:23:06.053 "name": "BaseBdev2", 00:23:06.053 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:06.053 "is_configured": true, 00:23:06.053 "data_offset": 0, 00:23:06.053 "data_size": 65536 00:23:06.053 }, 00:23:06.053 { 
00:23:06.053 "name": "BaseBdev3", 00:23:06.053 "uuid": "9143e426-1d0d-4f7a-b62e-6fb784a93a97", 00:23:06.053 "is_configured": true, 00:23:06.053 "data_offset": 0, 00:23:06.053 "data_size": 65536 00:23:06.053 }, 00:23:06.053 { 00:23:06.053 "name": "BaseBdev4", 00:23:06.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.053 "is_configured": false, 00:23:06.053 "data_offset": 0, 00:23:06.053 "data_size": 0 00:23:06.053 } 00:23:06.053 ] 00:23:06.053 }' 00:23:06.053 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.053 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.619 [2024-11-06 09:16:05.461963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:06.619 [2024-11-06 09:16:05.462039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:06.619 [2024-11-06 09:16:05.462051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:06.619 [2024-11-06 09:16:05.462372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:06.619 [2024-11-06 09:16:05.470816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:06.619 [2024-11-06 09:16:05.470848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:06.619 [2024-11-06 09:16:05.471150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.619 BaseBdev4 00:23:06.619 09:16:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:06.619 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.620 [ 00:23:06.620 { 00:23:06.620 "name": "BaseBdev4", 00:23:06.620 "aliases": [ 00:23:06.620 "29547207-8dee-4e3f-836b-82b4a52b5a03" 00:23:06.620 ], 00:23:06.620 "product_name": "Malloc disk", 00:23:06.620 "block_size": 512, 00:23:06.620 "num_blocks": 65536, 00:23:06.620 "uuid": "29547207-8dee-4e3f-836b-82b4a52b5a03", 00:23:06.620 "assigned_rate_limits": { 00:23:06.620 "rw_ios_per_sec": 0, 00:23:06.620 
"rw_mbytes_per_sec": 0, 00:23:06.620 "r_mbytes_per_sec": 0, 00:23:06.620 "w_mbytes_per_sec": 0 00:23:06.620 }, 00:23:06.620 "claimed": true, 00:23:06.620 "claim_type": "exclusive_write", 00:23:06.620 "zoned": false, 00:23:06.620 "supported_io_types": { 00:23:06.620 "read": true, 00:23:06.620 "write": true, 00:23:06.620 "unmap": true, 00:23:06.620 "flush": true, 00:23:06.620 "reset": true, 00:23:06.620 "nvme_admin": false, 00:23:06.620 "nvme_io": false, 00:23:06.620 "nvme_io_md": false, 00:23:06.620 "write_zeroes": true, 00:23:06.620 "zcopy": true, 00:23:06.620 "get_zone_info": false, 00:23:06.620 "zone_management": false, 00:23:06.620 "zone_append": false, 00:23:06.620 "compare": false, 00:23:06.620 "compare_and_write": false, 00:23:06.620 "abort": true, 00:23:06.620 "seek_hole": false, 00:23:06.620 "seek_data": false, 00:23:06.620 "copy": true, 00:23:06.620 "nvme_iov_md": false 00:23:06.620 }, 00:23:06.620 "memory_domains": [ 00:23:06.620 { 00:23:06.620 "dma_device_id": "system", 00:23:06.620 "dma_device_type": 1 00:23:06.620 }, 00:23:06.620 { 00:23:06.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.620 "dma_device_type": 2 00:23:06.620 } 00:23:06.620 ], 00:23:06.620 "driver_specific": {} 00:23:06.620 } 00:23:06.620 ] 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:06.620 09:16:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.620 "name": "Existed_Raid", 00:23:06.620 "uuid": "b67da37d-3281-4181-810c-9d96939ed322", 00:23:06.620 "strip_size_kb": 64, 00:23:06.620 "state": "online", 00:23:06.620 "raid_level": "raid5f", 00:23:06.620 "superblock": false, 00:23:06.620 "num_base_bdevs": 4, 00:23:06.620 "num_base_bdevs_discovered": 4, 00:23:06.620 "num_base_bdevs_operational": 4, 00:23:06.620 "base_bdevs_list": [ 00:23:06.620 { 00:23:06.620 "name": 
"BaseBdev1", 00:23:06.620 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:06.620 "is_configured": true, 00:23:06.620 "data_offset": 0, 00:23:06.620 "data_size": 65536 00:23:06.620 }, 00:23:06.620 { 00:23:06.620 "name": "BaseBdev2", 00:23:06.620 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:06.620 "is_configured": true, 00:23:06.620 "data_offset": 0, 00:23:06.620 "data_size": 65536 00:23:06.620 }, 00:23:06.620 { 00:23:06.620 "name": "BaseBdev3", 00:23:06.620 "uuid": "9143e426-1d0d-4f7a-b62e-6fb784a93a97", 00:23:06.620 "is_configured": true, 00:23:06.620 "data_offset": 0, 00:23:06.620 "data_size": 65536 00:23:06.620 }, 00:23:06.620 { 00:23:06.620 "name": "BaseBdev4", 00:23:06.620 "uuid": "29547207-8dee-4e3f-836b-82b4a52b5a03", 00:23:06.620 "is_configured": true, 00:23:06.620 "data_offset": 0, 00:23:06.620 "data_size": 65536 00:23:06.620 } 00:23:06.620 ] 00:23:06.620 }' 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.620 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.878 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:06.878 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.879 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.137 [2024-11-06 09:16:05.919679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.137 09:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.137 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.137 "name": "Existed_Raid", 00:23:07.137 "aliases": [ 00:23:07.137 "b67da37d-3281-4181-810c-9d96939ed322" 00:23:07.137 ], 00:23:07.137 "product_name": "Raid Volume", 00:23:07.137 "block_size": 512, 00:23:07.137 "num_blocks": 196608, 00:23:07.137 "uuid": "b67da37d-3281-4181-810c-9d96939ed322", 00:23:07.137 "assigned_rate_limits": { 00:23:07.137 "rw_ios_per_sec": 0, 00:23:07.137 "rw_mbytes_per_sec": 0, 00:23:07.137 "r_mbytes_per_sec": 0, 00:23:07.137 "w_mbytes_per_sec": 0 00:23:07.137 }, 00:23:07.137 "claimed": false, 00:23:07.137 "zoned": false, 00:23:07.137 "supported_io_types": { 00:23:07.137 "read": true, 00:23:07.137 "write": true, 00:23:07.137 "unmap": false, 00:23:07.137 "flush": false, 00:23:07.137 "reset": true, 00:23:07.137 "nvme_admin": false, 00:23:07.137 "nvme_io": false, 00:23:07.137 "nvme_io_md": false, 00:23:07.137 "write_zeroes": true, 00:23:07.137 "zcopy": false, 00:23:07.137 "get_zone_info": false, 00:23:07.137 "zone_management": false, 00:23:07.137 "zone_append": false, 00:23:07.137 "compare": false, 00:23:07.137 "compare_and_write": false, 00:23:07.137 "abort": false, 00:23:07.138 "seek_hole": false, 00:23:07.138 "seek_data": false, 00:23:07.138 "copy": false, 00:23:07.138 "nvme_iov_md": false 00:23:07.138 }, 00:23:07.138 "driver_specific": { 00:23:07.138 "raid": { 00:23:07.138 "uuid": "b67da37d-3281-4181-810c-9d96939ed322", 00:23:07.138 "strip_size_kb": 64, 
00:23:07.138 "state": "online", 00:23:07.138 "raid_level": "raid5f", 00:23:07.138 "superblock": false, 00:23:07.138 "num_base_bdevs": 4, 00:23:07.138 "num_base_bdevs_discovered": 4, 00:23:07.138 "num_base_bdevs_operational": 4, 00:23:07.138 "base_bdevs_list": [ 00:23:07.138 { 00:23:07.138 "name": "BaseBdev1", 00:23:07.138 "uuid": "65979bcc-9991-454e-b1dc-41bc3c90742c", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 0, 00:23:07.138 "data_size": 65536 00:23:07.138 }, 00:23:07.138 { 00:23:07.138 "name": "BaseBdev2", 00:23:07.138 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 0, 00:23:07.138 "data_size": 65536 00:23:07.138 }, 00:23:07.138 { 00:23:07.138 "name": "BaseBdev3", 00:23:07.138 "uuid": "9143e426-1d0d-4f7a-b62e-6fb784a93a97", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 0, 00:23:07.138 "data_size": 65536 00:23:07.138 }, 00:23:07.138 { 00:23:07.138 "name": "BaseBdev4", 00:23:07.138 "uuid": "29547207-8dee-4e3f-836b-82b4a52b5a03", 00:23:07.138 "is_configured": true, 00:23:07.138 "data_offset": 0, 00:23:07.138 "data_size": 65536 00:23:07.138 } 00:23:07.138 ] 00:23:07.138 } 00:23:07.138 } 00:23:07.138 }' 00:23:07.138 09:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:07.138 BaseBdev2 00:23:07.138 BaseBdev3 00:23:07.138 BaseBdev4' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:07.138 09:16:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.138 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:07.396 [2024-11-06 09:16:06.235098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.396 09:16:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.396 "name": "Existed_Raid", 00:23:07.396 "uuid": "b67da37d-3281-4181-810c-9d96939ed322", 00:23:07.396 "strip_size_kb": 64, 00:23:07.396 "state": "online", 00:23:07.396 "raid_level": "raid5f", 00:23:07.396 "superblock": false, 00:23:07.396 "num_base_bdevs": 4, 00:23:07.396 "num_base_bdevs_discovered": 3, 00:23:07.396 "num_base_bdevs_operational": 3, 00:23:07.396 "base_bdevs_list": [ 00:23:07.396 { 00:23:07.396 "name": null, 00:23:07.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.396 "is_configured": false, 00:23:07.396 "data_offset": 0, 00:23:07.396 "data_size": 65536 00:23:07.396 }, 00:23:07.396 { 00:23:07.396 "name": "BaseBdev2", 00:23:07.396 "uuid": "06f595ef-9270-417d-910f-9bdcc41c2813", 00:23:07.396 "is_configured": true, 00:23:07.396 "data_offset": 0, 00:23:07.396 "data_size": 65536 00:23:07.396 }, 00:23:07.396 { 00:23:07.396 "name": "BaseBdev3", 00:23:07.396 "uuid": "9143e426-1d0d-4f7a-b62e-6fb784a93a97", 00:23:07.396 "is_configured": true, 00:23:07.396 "data_offset": 0, 00:23:07.396 "data_size": 65536 00:23:07.396 }, 00:23:07.396 { 00:23:07.396 "name": "BaseBdev4", 00:23:07.396 "uuid": "29547207-8dee-4e3f-836b-82b4a52b5a03", 00:23:07.396 "is_configured": true, 00:23:07.396 "data_offset": 0, 00:23:07.396 "data_size": 65536 00:23:07.396 } 00:23:07.396 ] 00:23:07.396 }' 00:23:07.396 
09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.396 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.963 [2024-11-06 09:16:06.862459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:07.963 [2024-11-06 09:16:06.862570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.963 [2024-11-06 09:16:06.965778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.963 09:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.221 [2024-11-06 09:16:07.021740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.221 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.221 [2024-11-06 09:16:07.178739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:08.221 [2024-11-06 09:16:07.178799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 09:16:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 BaseBdev2 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 [ 00:23:08.479 { 00:23:08.479 "name": "BaseBdev2", 00:23:08.479 "aliases": [ 00:23:08.479 "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2" 00:23:08.479 ], 00:23:08.479 "product_name": "Malloc disk", 00:23:08.479 "block_size": 512, 00:23:08.479 "num_blocks": 65536, 00:23:08.479 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:08.479 "assigned_rate_limits": { 00:23:08.479 "rw_ios_per_sec": 0, 00:23:08.479 "rw_mbytes_per_sec": 0, 00:23:08.479 "r_mbytes_per_sec": 0, 00:23:08.479 "w_mbytes_per_sec": 0 00:23:08.479 }, 00:23:08.479 "claimed": false, 00:23:08.479 "zoned": false, 00:23:08.479 "supported_io_types": { 00:23:08.479 "read": true, 00:23:08.479 "write": true, 00:23:08.479 "unmap": true, 00:23:08.479 "flush": true, 00:23:08.479 "reset": true, 00:23:08.479 "nvme_admin": false, 00:23:08.479 "nvme_io": false, 00:23:08.479 "nvme_io_md": false, 00:23:08.479 "write_zeroes": true, 00:23:08.479 "zcopy": true, 00:23:08.479 "get_zone_info": false, 00:23:08.479 "zone_management": false, 00:23:08.479 "zone_append": false, 00:23:08.479 "compare": false, 00:23:08.479 "compare_and_write": false, 00:23:08.479 "abort": true, 00:23:08.479 "seek_hole": false, 00:23:08.479 "seek_data": false, 00:23:08.479 "copy": true, 00:23:08.479 "nvme_iov_md": false 00:23:08.479 }, 00:23:08.479 "memory_domains": [ 00:23:08.479 { 00:23:08.479 "dma_device_id": "system", 00:23:08.479 "dma_device_type": 1 00:23:08.479 }, 
00:23:08.479 { 00:23:08.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.479 "dma_device_type": 2 00:23:08.479 } 00:23:08.479 ], 00:23:08.479 "driver_specific": {} 00:23:08.479 } 00:23:08.479 ] 00:23:08.479 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.480 BaseBdev3 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.480 [ 00:23:08.480 { 00:23:08.480 "name": "BaseBdev3", 00:23:08.480 "aliases": [ 00:23:08.480 "824f0002-cc62-4ed1-bca0-6a74c7efcfe6" 00:23:08.480 ], 00:23:08.480 "product_name": "Malloc disk", 00:23:08.480 "block_size": 512, 00:23:08.480 "num_blocks": 65536, 00:23:08.480 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:08.480 "assigned_rate_limits": { 00:23:08.480 "rw_ios_per_sec": 0, 00:23:08.480 "rw_mbytes_per_sec": 0, 00:23:08.480 "r_mbytes_per_sec": 0, 00:23:08.480 "w_mbytes_per_sec": 0 00:23:08.480 }, 00:23:08.480 "claimed": false, 00:23:08.480 "zoned": false, 00:23:08.480 "supported_io_types": { 00:23:08.480 "read": true, 00:23:08.480 "write": true, 00:23:08.480 "unmap": true, 00:23:08.480 "flush": true, 00:23:08.480 "reset": true, 00:23:08.480 "nvme_admin": false, 00:23:08.480 "nvme_io": false, 00:23:08.480 "nvme_io_md": false, 00:23:08.480 "write_zeroes": true, 00:23:08.480 "zcopy": true, 00:23:08.480 "get_zone_info": false, 00:23:08.480 "zone_management": false, 00:23:08.480 "zone_append": false, 00:23:08.480 "compare": false, 00:23:08.480 "compare_and_write": false, 00:23:08.480 "abort": true, 00:23:08.480 "seek_hole": false, 00:23:08.480 "seek_data": false, 00:23:08.480 "copy": true, 00:23:08.480 "nvme_iov_md": false 00:23:08.480 }, 00:23:08.480 "memory_domains": [ 00:23:08.480 { 00:23:08.480 "dma_device_id": "system", 00:23:08.480 
"dma_device_type": 1 00:23:08.480 }, 00:23:08.480 { 00:23:08.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.480 "dma_device_type": 2 00:23:08.480 } 00:23:08.480 ], 00:23:08.480 "driver_specific": {} 00:23:08.480 } 00:23:08.480 ] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.480 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.739 BaseBdev4 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:08.739 09:16:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.739 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.739 [ 00:23:08.739 { 00:23:08.739 "name": "BaseBdev4", 00:23:08.739 "aliases": [ 00:23:08.739 "4553f082-64d6-4fc4-b35d-23d1bd0be2d1" 00:23:08.739 ], 00:23:08.739 "product_name": "Malloc disk", 00:23:08.739 "block_size": 512, 00:23:08.739 "num_blocks": 65536, 00:23:08.739 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:08.739 "assigned_rate_limits": { 00:23:08.739 "rw_ios_per_sec": 0, 00:23:08.739 "rw_mbytes_per_sec": 0, 00:23:08.739 "r_mbytes_per_sec": 0, 00:23:08.739 "w_mbytes_per_sec": 0 00:23:08.739 }, 00:23:08.739 "claimed": false, 00:23:08.739 "zoned": false, 00:23:08.739 "supported_io_types": { 00:23:08.739 "read": true, 00:23:08.739 "write": true, 00:23:08.739 "unmap": true, 00:23:08.739 "flush": true, 00:23:08.739 "reset": true, 00:23:08.739 "nvme_admin": false, 00:23:08.739 "nvme_io": false, 00:23:08.739 "nvme_io_md": false, 00:23:08.739 "write_zeroes": true, 00:23:08.739 "zcopy": true, 00:23:08.739 "get_zone_info": false, 00:23:08.739 "zone_management": false, 00:23:08.739 "zone_append": false, 00:23:08.739 "compare": false, 00:23:08.739 "compare_and_write": false, 00:23:08.739 "abort": true, 00:23:08.739 "seek_hole": false, 00:23:08.739 "seek_data": false, 00:23:08.739 "copy": true, 00:23:08.739 "nvme_iov_md": false 00:23:08.739 }, 00:23:08.739 "memory_domains": [ 00:23:08.739 { 00:23:08.739 
"dma_device_id": "system", 00:23:08.739 "dma_device_type": 1 00:23:08.739 }, 00:23:08.739 { 00:23:08.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.739 "dma_device_type": 2 00:23:08.739 } 00:23:08.739 ], 00:23:08.740 "driver_specific": {} 00:23:08.740 } 00:23:08.740 ] 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.740 [2024-11-06 09:16:07.601167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:08.740 [2024-11-06 09:16:07.601220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:08.740 [2024-11-06 09:16:07.601248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.740 [2024-11-06 09:16:07.603525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.740 [2024-11-06 09:16:07.603580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.740 "name": "Existed_Raid", 00:23:08.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.740 "strip_size_kb": 64, 00:23:08.740 "state": "configuring", 00:23:08.740 "raid_level": "raid5f", 00:23:08.740 "superblock": false, 00:23:08.740 
"num_base_bdevs": 4, 00:23:08.740 "num_base_bdevs_discovered": 3, 00:23:08.740 "num_base_bdevs_operational": 4, 00:23:08.740 "base_bdevs_list": [ 00:23:08.740 { 00:23:08.740 "name": "BaseBdev1", 00:23:08.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.740 "is_configured": false, 00:23:08.740 "data_offset": 0, 00:23:08.740 "data_size": 0 00:23:08.740 }, 00:23:08.740 { 00:23:08.740 "name": "BaseBdev2", 00:23:08.740 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:08.740 "is_configured": true, 00:23:08.740 "data_offset": 0, 00:23:08.740 "data_size": 65536 00:23:08.740 }, 00:23:08.740 { 00:23:08.740 "name": "BaseBdev3", 00:23:08.740 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:08.740 "is_configured": true, 00:23:08.740 "data_offset": 0, 00:23:08.740 "data_size": 65536 00:23:08.740 }, 00:23:08.740 { 00:23:08.740 "name": "BaseBdev4", 00:23:08.740 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:08.740 "is_configured": true, 00:23:08.740 "data_offset": 0, 00:23:08.740 "data_size": 65536 00:23:08.740 } 00:23:08.740 ] 00:23:08.740 }' 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.740 09:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.307 [2024-11-06 09:16:08.076504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.307 "name": "Existed_Raid", 00:23:09.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.307 "strip_size_kb": 64, 00:23:09.307 "state": "configuring", 00:23:09.307 "raid_level": "raid5f", 00:23:09.307 "superblock": false, 00:23:09.307 "num_base_bdevs": 4, 
00:23:09.307 "num_base_bdevs_discovered": 2, 00:23:09.307 "num_base_bdevs_operational": 4, 00:23:09.307 "base_bdevs_list": [ 00:23:09.307 { 00:23:09.307 "name": "BaseBdev1", 00:23:09.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.307 "is_configured": false, 00:23:09.307 "data_offset": 0, 00:23:09.307 "data_size": 0 00:23:09.307 }, 00:23:09.307 { 00:23:09.307 "name": null, 00:23:09.307 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:09.307 "is_configured": false, 00:23:09.307 "data_offset": 0, 00:23:09.307 "data_size": 65536 00:23:09.307 }, 00:23:09.307 { 00:23:09.307 "name": "BaseBdev3", 00:23:09.307 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:09.307 "is_configured": true, 00:23:09.307 "data_offset": 0, 00:23:09.307 "data_size": 65536 00:23:09.307 }, 00:23:09.307 { 00:23:09.307 "name": "BaseBdev4", 00:23:09.307 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:09.307 "is_configured": true, 00:23:09.307 "data_offset": 0, 00:23:09.307 "data_size": 65536 00:23:09.307 } 00:23:09.307 ] 00:23:09.307 }' 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.307 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:09.566 09:16:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.566 [2024-11-06 09:16:08.597867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.566 BaseBdev1 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.566 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.825 09:16:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.825 [ 00:23:09.825 { 00:23:09.825 "name": "BaseBdev1", 00:23:09.825 "aliases": [ 00:23:09.825 "9715a0b3-44a6-46a8-bf24-324cd78ccb13" 00:23:09.825 ], 00:23:09.825 "product_name": "Malloc disk", 00:23:09.825 "block_size": 512, 00:23:09.825 "num_blocks": 65536, 00:23:09.825 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:09.825 "assigned_rate_limits": { 00:23:09.825 "rw_ios_per_sec": 0, 00:23:09.825 "rw_mbytes_per_sec": 0, 00:23:09.825 "r_mbytes_per_sec": 0, 00:23:09.825 "w_mbytes_per_sec": 0 00:23:09.825 }, 00:23:09.825 "claimed": true, 00:23:09.825 "claim_type": "exclusive_write", 00:23:09.825 "zoned": false, 00:23:09.825 "supported_io_types": { 00:23:09.825 "read": true, 00:23:09.825 "write": true, 00:23:09.825 "unmap": true, 00:23:09.825 "flush": true, 00:23:09.825 "reset": true, 00:23:09.825 "nvme_admin": false, 00:23:09.825 "nvme_io": false, 00:23:09.825 "nvme_io_md": false, 00:23:09.825 "write_zeroes": true, 00:23:09.825 "zcopy": true, 00:23:09.825 "get_zone_info": false, 00:23:09.825 "zone_management": false, 00:23:09.825 "zone_append": false, 00:23:09.825 "compare": false, 00:23:09.825 "compare_and_write": false, 00:23:09.825 "abort": true, 00:23:09.825 "seek_hole": false, 00:23:09.825 "seek_data": false, 00:23:09.825 "copy": true, 00:23:09.825 "nvme_iov_md": false 00:23:09.825 }, 00:23:09.825 "memory_domains": [ 00:23:09.825 { 00:23:09.825 "dma_device_id": "system", 00:23:09.825 "dma_device_type": 1 00:23:09.825 }, 00:23:09.825 { 00:23:09.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.825 "dma_device_type": 2 00:23:09.825 } 00:23:09.825 ], 00:23:09.825 "driver_specific": {} 00:23:09.825 } 00:23:09.825 ] 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:09.825 09:16:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.825 "name": "Existed_Raid", 00:23:09.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.825 "strip_size_kb": 64, 00:23:09.825 "state": 
"configuring", 00:23:09.825 "raid_level": "raid5f", 00:23:09.825 "superblock": false, 00:23:09.825 "num_base_bdevs": 4, 00:23:09.825 "num_base_bdevs_discovered": 3, 00:23:09.825 "num_base_bdevs_operational": 4, 00:23:09.825 "base_bdevs_list": [ 00:23:09.825 { 00:23:09.825 "name": "BaseBdev1", 00:23:09.825 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:09.825 "is_configured": true, 00:23:09.825 "data_offset": 0, 00:23:09.825 "data_size": 65536 00:23:09.825 }, 00:23:09.825 { 00:23:09.825 "name": null, 00:23:09.825 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:09.825 "is_configured": false, 00:23:09.825 "data_offset": 0, 00:23:09.825 "data_size": 65536 00:23:09.825 }, 00:23:09.825 { 00:23:09.825 "name": "BaseBdev3", 00:23:09.825 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:09.825 "is_configured": true, 00:23:09.825 "data_offset": 0, 00:23:09.825 "data_size": 65536 00:23:09.825 }, 00:23:09.825 { 00:23:09.825 "name": "BaseBdev4", 00:23:09.825 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:09.825 "is_configured": true, 00:23:09.825 "data_offset": 0, 00:23:09.825 "data_size": 65536 00:23:09.825 } 00:23:09.825 ] 00:23:09.825 }' 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.825 09:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.084 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.084 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.084 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:10.084 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.084 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.344 09:16:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.344 [2024-11-06 09:16:09.149226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.344 09:16:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.344 "name": "Existed_Raid", 00:23:10.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.344 "strip_size_kb": 64, 00:23:10.344 "state": "configuring", 00:23:10.344 "raid_level": "raid5f", 00:23:10.344 "superblock": false, 00:23:10.344 "num_base_bdevs": 4, 00:23:10.344 "num_base_bdevs_discovered": 2, 00:23:10.344 "num_base_bdevs_operational": 4, 00:23:10.344 "base_bdevs_list": [ 00:23:10.344 { 00:23:10.344 "name": "BaseBdev1", 00:23:10.344 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:10.344 "is_configured": true, 00:23:10.344 "data_offset": 0, 00:23:10.344 "data_size": 65536 00:23:10.344 }, 00:23:10.344 { 00:23:10.344 "name": null, 00:23:10.344 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:10.344 "is_configured": false, 00:23:10.344 "data_offset": 0, 00:23:10.344 "data_size": 65536 00:23:10.344 }, 00:23:10.344 { 00:23:10.344 "name": null, 00:23:10.344 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:10.344 "is_configured": false, 00:23:10.344 "data_offset": 0, 00:23:10.344 "data_size": 65536 00:23:10.344 }, 00:23:10.344 { 00:23:10.344 "name": "BaseBdev4", 00:23:10.344 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:10.344 "is_configured": true, 00:23:10.344 "data_offset": 0, 00:23:10.344 "data_size": 65536 00:23:10.344 } 00:23:10.344 ] 00:23:10.344 }' 00:23:10.344 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.344 09:16:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.603 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.603 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.603 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:10.603 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.866 [2024-11-06 09:16:09.656893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.866 
09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.866 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.866 "name": "Existed_Raid", 00:23:10.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.866 "strip_size_kb": 64, 00:23:10.866 "state": "configuring", 00:23:10.866 "raid_level": "raid5f", 00:23:10.866 "superblock": false, 00:23:10.867 "num_base_bdevs": 4, 00:23:10.867 "num_base_bdevs_discovered": 3, 00:23:10.867 "num_base_bdevs_operational": 4, 00:23:10.867 "base_bdevs_list": [ 00:23:10.867 { 00:23:10.867 "name": "BaseBdev1", 00:23:10.867 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:10.867 "is_configured": true, 00:23:10.867 "data_offset": 0, 00:23:10.867 "data_size": 65536 00:23:10.867 }, 00:23:10.867 { 00:23:10.867 "name": null, 00:23:10.867 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:10.867 "is_configured": 
false, 00:23:10.867 "data_offset": 0, 00:23:10.867 "data_size": 65536 00:23:10.867 }, 00:23:10.867 { 00:23:10.867 "name": "BaseBdev3", 00:23:10.867 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:10.867 "is_configured": true, 00:23:10.867 "data_offset": 0, 00:23:10.867 "data_size": 65536 00:23:10.867 }, 00:23:10.867 { 00:23:10.867 "name": "BaseBdev4", 00:23:10.867 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:10.867 "is_configured": true, 00:23:10.867 "data_offset": 0, 00:23:10.867 "data_size": 65536 00:23:10.867 } 00:23:10.867 ] 00:23:10.867 }' 00:23:10.867 09:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.867 09:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.126 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.126 [2024-11-06 09:16:10.124316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.400 "name": "Existed_Raid", 00:23:11.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:11.400 "strip_size_kb": 64, 00:23:11.400 "state": "configuring", 00:23:11.400 "raid_level": "raid5f", 00:23:11.400 "superblock": false, 00:23:11.400 "num_base_bdevs": 4, 00:23:11.400 "num_base_bdevs_discovered": 2, 00:23:11.400 "num_base_bdevs_operational": 4, 00:23:11.400 "base_bdevs_list": [ 00:23:11.400 { 00:23:11.400 "name": null, 00:23:11.400 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:11.400 "is_configured": false, 00:23:11.400 "data_offset": 0, 00:23:11.400 "data_size": 65536 00:23:11.400 }, 00:23:11.400 { 00:23:11.400 "name": null, 00:23:11.400 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:11.400 "is_configured": false, 00:23:11.400 "data_offset": 0, 00:23:11.400 "data_size": 65536 00:23:11.400 }, 00:23:11.400 { 00:23:11.400 "name": "BaseBdev3", 00:23:11.400 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:11.400 "is_configured": true, 00:23:11.400 "data_offset": 0, 00:23:11.400 "data_size": 65536 00:23:11.400 }, 00:23:11.400 { 00:23:11.400 "name": "BaseBdev4", 00:23:11.400 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:11.400 "is_configured": true, 00:23:11.400 "data_offset": 0, 00:23:11.400 "data_size": 65536 00:23:11.400 } 00:23:11.400 ] 00:23:11.400 }' 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.400 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.671 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.929 [2024-11-06 09:16:10.713670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.929 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.930 "name": "Existed_Raid", 00:23:11.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.930 "strip_size_kb": 64, 00:23:11.930 "state": "configuring", 00:23:11.930 "raid_level": "raid5f", 00:23:11.930 "superblock": false, 00:23:11.930 "num_base_bdevs": 4, 00:23:11.930 "num_base_bdevs_discovered": 3, 00:23:11.930 "num_base_bdevs_operational": 4, 00:23:11.930 "base_bdevs_list": [ 00:23:11.930 { 00:23:11.930 "name": null, 00:23:11.930 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:11.930 "is_configured": false, 00:23:11.930 "data_offset": 0, 00:23:11.930 "data_size": 65536 00:23:11.930 }, 00:23:11.930 { 00:23:11.930 "name": "BaseBdev2", 00:23:11.930 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:11.930 "is_configured": true, 00:23:11.930 "data_offset": 0, 00:23:11.930 "data_size": 65536 00:23:11.930 }, 00:23:11.930 { 00:23:11.930 "name": "BaseBdev3", 00:23:11.930 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:11.930 "is_configured": true, 00:23:11.930 "data_offset": 0, 00:23:11.930 "data_size": 65536 00:23:11.930 }, 00:23:11.930 { 00:23:11.930 "name": "BaseBdev4", 00:23:11.930 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:11.930 "is_configured": true, 00:23:11.930 "data_offset": 0, 00:23:11.930 "data_size": 65536 00:23:11.930 } 00:23:11.930 ] 00:23:11.930 }' 00:23:11.930 09:16:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.930 09:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9715a0b3-44a6-46a8-bf24-324cd78ccb13 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.188 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.446 [2024-11-06 09:16:11.262976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:12.446 [2024-11-06 
09:16:11.263043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:12.446 [2024-11-06 09:16:11.263053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:12.446 [2024-11-06 09:16:11.263343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:12.446 [2024-11-06 09:16:11.270157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:12.446 [2024-11-06 09:16:11.270199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:12.446 [2024-11-06 09:16:11.270493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.446 NewBaseBdev 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.446 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.446 [ 00:23:12.446 { 00:23:12.446 "name": "NewBaseBdev", 00:23:12.446 "aliases": [ 00:23:12.446 "9715a0b3-44a6-46a8-bf24-324cd78ccb13" 00:23:12.446 ], 00:23:12.446 "product_name": "Malloc disk", 00:23:12.446 "block_size": 512, 00:23:12.446 "num_blocks": 65536, 00:23:12.446 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:12.446 "assigned_rate_limits": { 00:23:12.446 "rw_ios_per_sec": 0, 00:23:12.446 "rw_mbytes_per_sec": 0, 00:23:12.446 "r_mbytes_per_sec": 0, 00:23:12.446 "w_mbytes_per_sec": 0 00:23:12.446 }, 00:23:12.446 "claimed": true, 00:23:12.446 "claim_type": "exclusive_write", 00:23:12.446 "zoned": false, 00:23:12.446 "supported_io_types": { 00:23:12.446 "read": true, 00:23:12.446 "write": true, 00:23:12.446 "unmap": true, 00:23:12.446 "flush": true, 00:23:12.446 "reset": true, 00:23:12.446 "nvme_admin": false, 00:23:12.446 "nvme_io": false, 00:23:12.446 "nvme_io_md": false, 00:23:12.447 "write_zeroes": true, 00:23:12.447 "zcopy": true, 00:23:12.447 "get_zone_info": false, 00:23:12.447 "zone_management": false, 00:23:12.447 "zone_append": false, 00:23:12.447 "compare": false, 00:23:12.447 "compare_and_write": false, 00:23:12.447 "abort": true, 00:23:12.447 "seek_hole": false, 00:23:12.447 "seek_data": false, 00:23:12.447 "copy": true, 00:23:12.447 "nvme_iov_md": false 00:23:12.447 }, 00:23:12.447 "memory_domains": [ 00:23:12.447 { 00:23:12.447 "dma_device_id": "system", 00:23:12.447 "dma_device_type": 1 00:23:12.447 }, 00:23:12.447 { 00:23:12.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.447 "dma_device_type": 2 00:23:12.447 } 
00:23:12.447 ], 00:23:12.447 "driver_specific": {} 00:23:12.447 } 00:23:12.447 ] 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.447 "name": "Existed_Raid", 00:23:12.447 "uuid": "780942a2-4048-4aaf-a806-847b72589e44", 00:23:12.447 "strip_size_kb": 64, 00:23:12.447 "state": "online", 00:23:12.447 "raid_level": "raid5f", 00:23:12.447 "superblock": false, 00:23:12.447 "num_base_bdevs": 4, 00:23:12.447 "num_base_bdevs_discovered": 4, 00:23:12.447 "num_base_bdevs_operational": 4, 00:23:12.447 "base_bdevs_list": [ 00:23:12.447 { 00:23:12.447 "name": "NewBaseBdev", 00:23:12.447 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:12.447 "is_configured": true, 00:23:12.447 "data_offset": 0, 00:23:12.447 "data_size": 65536 00:23:12.447 }, 00:23:12.447 { 00:23:12.447 "name": "BaseBdev2", 00:23:12.447 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:12.447 "is_configured": true, 00:23:12.447 "data_offset": 0, 00:23:12.447 "data_size": 65536 00:23:12.447 }, 00:23:12.447 { 00:23:12.447 "name": "BaseBdev3", 00:23:12.447 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:12.447 "is_configured": true, 00:23:12.447 "data_offset": 0, 00:23:12.447 "data_size": 65536 00:23:12.447 }, 00:23:12.447 { 00:23:12.447 "name": "BaseBdev4", 00:23:12.447 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:12.447 "is_configured": true, 00:23:12.447 "data_offset": 0, 00:23:12.447 "data_size": 65536 00:23:12.447 } 00:23:12.447 ] 00:23:12.447 }' 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.447 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.014 [2024-11-06 09:16:11.786583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.014 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.014 "name": "Existed_Raid", 00:23:13.014 "aliases": [ 00:23:13.014 "780942a2-4048-4aaf-a806-847b72589e44" 00:23:13.014 ], 00:23:13.014 "product_name": "Raid Volume", 00:23:13.014 "block_size": 512, 00:23:13.014 "num_blocks": 196608, 00:23:13.014 "uuid": "780942a2-4048-4aaf-a806-847b72589e44", 00:23:13.014 "assigned_rate_limits": { 00:23:13.014 "rw_ios_per_sec": 0, 00:23:13.014 "rw_mbytes_per_sec": 0, 00:23:13.014 "r_mbytes_per_sec": 0, 00:23:13.014 "w_mbytes_per_sec": 0 00:23:13.014 }, 00:23:13.014 "claimed": false, 00:23:13.014 "zoned": false, 00:23:13.014 "supported_io_types": { 00:23:13.014 "read": true, 00:23:13.014 "write": true, 00:23:13.014 "unmap": false, 00:23:13.014 "flush": false, 00:23:13.014 "reset": true, 00:23:13.014 "nvme_admin": false, 00:23:13.014 "nvme_io": false, 00:23:13.014 "nvme_io_md": 
false, 00:23:13.014 "write_zeroes": true, 00:23:13.014 "zcopy": false, 00:23:13.014 "get_zone_info": false, 00:23:13.014 "zone_management": false, 00:23:13.014 "zone_append": false, 00:23:13.014 "compare": false, 00:23:13.014 "compare_and_write": false, 00:23:13.014 "abort": false, 00:23:13.014 "seek_hole": false, 00:23:13.014 "seek_data": false, 00:23:13.014 "copy": false, 00:23:13.014 "nvme_iov_md": false 00:23:13.014 }, 00:23:13.014 "driver_specific": { 00:23:13.014 "raid": { 00:23:13.014 "uuid": "780942a2-4048-4aaf-a806-847b72589e44", 00:23:13.014 "strip_size_kb": 64, 00:23:13.015 "state": "online", 00:23:13.015 "raid_level": "raid5f", 00:23:13.015 "superblock": false, 00:23:13.015 "num_base_bdevs": 4, 00:23:13.015 "num_base_bdevs_discovered": 4, 00:23:13.015 "num_base_bdevs_operational": 4, 00:23:13.015 "base_bdevs_list": [ 00:23:13.015 { 00:23:13.015 "name": "NewBaseBdev", 00:23:13.015 "uuid": "9715a0b3-44a6-46a8-bf24-324cd78ccb13", 00:23:13.015 "is_configured": true, 00:23:13.015 "data_offset": 0, 00:23:13.015 "data_size": 65536 00:23:13.015 }, 00:23:13.015 { 00:23:13.015 "name": "BaseBdev2", 00:23:13.015 "uuid": "7f6be70d-1102-4ebf-ad52-986a7ec3e3c2", 00:23:13.015 "is_configured": true, 00:23:13.015 "data_offset": 0, 00:23:13.015 "data_size": 65536 00:23:13.015 }, 00:23:13.015 { 00:23:13.015 "name": "BaseBdev3", 00:23:13.015 "uuid": "824f0002-cc62-4ed1-bca0-6a74c7efcfe6", 00:23:13.015 "is_configured": true, 00:23:13.015 "data_offset": 0, 00:23:13.015 "data_size": 65536 00:23:13.015 }, 00:23:13.015 { 00:23:13.015 "name": "BaseBdev4", 00:23:13.015 "uuid": "4553f082-64d6-4fc4-b35d-23d1bd0be2d1", 00:23:13.015 "is_configured": true, 00:23:13.015 "data_offset": 0, 00:23:13.015 "data_size": 65536 00:23:13.015 } 00:23:13.015 ] 00:23:13.015 } 00:23:13.015 } 00:23:13.015 }' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:13.015 09:16:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:13.015 BaseBdev2 00:23:13.015 BaseBdev3 00:23:13.015 BaseBdev4' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.015 09:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.015 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.274 09:16:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.274 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.274 [2024-11-06 09:16:12.114333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:13.274 [2024-11-06 09:16:12.114367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.274 [2024-11-06 09:16:12.114449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.274 [2024-11-06 09:16:12.114763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.275 [2024-11-06 09:16:12.114777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82472 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 82472 ']' 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 82472 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82472 00:23:13.275 killing process with pid 82472 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82472' 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 82472 00:23:13.275 [2024-11-06 09:16:12.165563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.275 09:16:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 82472 00:23:13.841 [2024-11-06 09:16:12.596631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.776 09:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:14.776 00:23:14.776 real 0m12.035s 00:23:14.776 user 0m19.058s 00:23:14.776 sys 0m2.534s 00:23:14.776 ************************************ 00:23:14.776 END TEST raid5f_state_function_test 00:23:14.776 ************************************ 00:23:14.776 09:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.776 09:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.034 09:16:13 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:15.034 09:16:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:15.034 09:16:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.034 09:16:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.034 ************************************ 00:23:15.034 START TEST 
raid5f_state_function_test_sb 00:23:15.034 ************************************ 00:23:15.034 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:23:15.034 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:15.034 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:15.034 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:15.035 
09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83148 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83148' 00:23:15.035 Process raid pid: 83148 00:23:15.035 09:16:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83148 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83148 ']' 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.035 09:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.035 [2024-11-06 09:16:13.999050] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:23:15.035 [2024-11-06 09:16:13.999408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.294 [2024-11-06 09:16:14.185667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.294 [2024-11-06 09:16:14.313888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.552 [2024-11-06 09:16:14.538819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.552 [2024-11-06 09:16:14.538869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.120 [2024-11-06 09:16:14.869510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:16.120 [2024-11-06 09:16:14.869565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:16.120 [2024-11-06 09:16:14.869578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.120 [2024-11-06 09:16:14.869591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.120 [2024-11-06 09:16:14.869600] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:23:16.120 [2024-11-06 09:16:14.869617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.120 [2024-11-06 09:16:14.869631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:16.120 [2024-11-06 09:16:14.869644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.120 "name": "Existed_Raid", 00:23:16.120 "uuid": "c7bfec3e-197c-4808-80d0-4521f210ff62", 00:23:16.120 "strip_size_kb": 64, 00:23:16.120 "state": "configuring", 00:23:16.120 "raid_level": "raid5f", 00:23:16.120 "superblock": true, 00:23:16.120 "num_base_bdevs": 4, 00:23:16.120 "num_base_bdevs_discovered": 0, 00:23:16.120 "num_base_bdevs_operational": 4, 00:23:16.120 "base_bdevs_list": [ 00:23:16.120 { 00:23:16.120 "name": "BaseBdev1", 00:23:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.120 "is_configured": false, 00:23:16.120 "data_offset": 0, 00:23:16.120 "data_size": 0 00:23:16.120 }, 00:23:16.120 { 00:23:16.120 "name": "BaseBdev2", 00:23:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.120 "is_configured": false, 00:23:16.120 "data_offset": 0, 00:23:16.120 "data_size": 0 00:23:16.120 }, 00:23:16.120 { 00:23:16.120 "name": "BaseBdev3", 00:23:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.120 "is_configured": false, 00:23:16.120 "data_offset": 0, 00:23:16.120 "data_size": 0 00:23:16.120 }, 00:23:16.120 { 00:23:16.120 "name": "BaseBdev4", 00:23:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.120 "is_configured": false, 00:23:16.120 "data_offset": 0, 00:23:16.120 "data_size": 0 00:23:16.120 } 00:23:16.120 ] 00:23:16.120 }' 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.120 09:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 [2024-11-06 09:16:15.308846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.379 [2024-11-06 09:16:15.308900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 [2024-11-06 09:16:15.320839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:16.379 [2024-11-06 09:16:15.320885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:16.379 [2024-11-06 09:16:15.320912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.379 [2024-11-06 09:16:15.320925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.379 [2024-11-06 09:16:15.320934] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:16.379 [2024-11-06 09:16:15.320947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.379 [2024-11-06 09:16:15.320955] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:16.379 [2024-11-06 09:16:15.320967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 [2024-11-06 09:16:15.371810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.379 BaseBdev1 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 [ 00:23:16.379 { 00:23:16.379 "name": "BaseBdev1", 00:23:16.379 "aliases": [ 00:23:16.379 "41699241-8a2a-48fc-9537-3a2406f05778" 00:23:16.379 ], 00:23:16.379 "product_name": "Malloc disk", 00:23:16.379 "block_size": 512, 00:23:16.379 "num_blocks": 65536, 00:23:16.379 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:16.379 "assigned_rate_limits": { 00:23:16.379 "rw_ios_per_sec": 0, 00:23:16.379 "rw_mbytes_per_sec": 0, 00:23:16.379 "r_mbytes_per_sec": 0, 00:23:16.379 "w_mbytes_per_sec": 0 00:23:16.379 }, 00:23:16.379 "claimed": true, 00:23:16.379 "claim_type": "exclusive_write", 00:23:16.379 "zoned": false, 00:23:16.379 "supported_io_types": { 00:23:16.379 "read": true, 00:23:16.379 "write": true, 00:23:16.379 "unmap": true, 00:23:16.379 "flush": true, 00:23:16.379 "reset": true, 00:23:16.379 "nvme_admin": false, 00:23:16.379 "nvme_io": false, 00:23:16.379 "nvme_io_md": false, 00:23:16.379 "write_zeroes": true, 00:23:16.379 "zcopy": true, 00:23:16.379 "get_zone_info": false, 00:23:16.379 "zone_management": false, 00:23:16.379 "zone_append": false, 00:23:16.379 "compare": false, 00:23:16.379 "compare_and_write": false, 00:23:16.379 "abort": true, 00:23:16.379 "seek_hole": false, 00:23:16.379 "seek_data": false, 00:23:16.379 "copy": true, 00:23:16.379 "nvme_iov_md": false 00:23:16.379 }, 00:23:16.379 "memory_domains": [ 00:23:16.379 { 00:23:16.379 "dma_device_id": "system", 00:23:16.379 "dma_device_type": 1 00:23:16.379 }, 00:23:16.379 { 00:23:16.379 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:16.379 "dma_device_type": 2 00:23:16.379 } 00:23:16.379 ], 00:23:16.379 "driver_specific": {} 00:23:16.379 } 00:23:16.379 ] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.379 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.638 09:16:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.638 "name": "Existed_Raid", 00:23:16.638 "uuid": "a46d2278-a0f2-4178-9049-d2ec6020d84e", 00:23:16.638 "strip_size_kb": 64, 00:23:16.638 "state": "configuring", 00:23:16.638 "raid_level": "raid5f", 00:23:16.638 "superblock": true, 00:23:16.638 "num_base_bdevs": 4, 00:23:16.638 "num_base_bdevs_discovered": 1, 00:23:16.638 "num_base_bdevs_operational": 4, 00:23:16.638 "base_bdevs_list": [ 00:23:16.638 { 00:23:16.638 "name": "BaseBdev1", 00:23:16.638 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:16.638 "is_configured": true, 00:23:16.638 "data_offset": 2048, 00:23:16.638 "data_size": 63488 00:23:16.638 }, 00:23:16.638 { 00:23:16.638 "name": "BaseBdev2", 00:23:16.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.638 "is_configured": false, 00:23:16.638 "data_offset": 0, 00:23:16.638 "data_size": 0 00:23:16.638 }, 00:23:16.638 { 00:23:16.638 "name": "BaseBdev3", 00:23:16.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.638 "is_configured": false, 00:23:16.638 "data_offset": 0, 00:23:16.638 "data_size": 0 00:23:16.638 }, 00:23:16.638 { 00:23:16.638 "name": "BaseBdev4", 00:23:16.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.638 "is_configured": false, 00:23:16.638 "data_offset": 0, 00:23:16.638 "data_size": 0 00:23:16.638 } 00:23:16.638 ] 00:23:16.638 }' 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.638 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.908 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:16.908 09:16:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.908 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.908 [2024-11-06 09:16:15.863433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.908 [2024-11-06 09:16:15.863497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.909 [2024-11-06 09:16:15.871577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.909 [2024-11-06 09:16:15.873861] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.909 [2024-11-06 09:16:15.873913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.909 [2024-11-06 09:16:15.873925] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:16.909 [2024-11-06 09:16:15.873942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.909 [2024-11-06 09:16:15.873950] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:16.909 [2024-11-06 09:16:15.873963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.909 09:16:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.909 "name": "Existed_Raid", 00:23:16.909 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:16.909 "strip_size_kb": 64, 00:23:16.909 "state": "configuring", 00:23:16.909 "raid_level": "raid5f", 00:23:16.909 "superblock": true, 00:23:16.909 "num_base_bdevs": 4, 00:23:16.909 "num_base_bdevs_discovered": 1, 00:23:16.909 "num_base_bdevs_operational": 4, 00:23:16.909 "base_bdevs_list": [ 00:23:16.909 { 00:23:16.909 "name": "BaseBdev1", 00:23:16.909 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:16.909 "is_configured": true, 00:23:16.909 "data_offset": 2048, 00:23:16.909 "data_size": 63488 00:23:16.909 }, 00:23:16.909 { 00:23:16.909 "name": "BaseBdev2", 00:23:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.909 "is_configured": false, 00:23:16.909 "data_offset": 0, 00:23:16.909 "data_size": 0 00:23:16.909 }, 00:23:16.909 { 00:23:16.909 "name": "BaseBdev3", 00:23:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.909 "is_configured": false, 00:23:16.909 "data_offset": 0, 00:23:16.909 "data_size": 0 00:23:16.909 }, 00:23:16.909 { 00:23:16.909 "name": "BaseBdev4", 00:23:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.909 "is_configured": false, 00:23:16.909 "data_offset": 0, 00:23:16.909 "data_size": 0 00:23:16.909 } 00:23:16.909 ] 00:23:16.909 }' 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.909 09:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.498 [2024-11-06 09:16:16.356397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.498 BaseBdev2 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.498 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.498 [ 00:23:17.498 { 00:23:17.498 "name": "BaseBdev2", 00:23:17.498 "aliases": [ 00:23:17.499 
"26fdc907-d476-4f72-9948-63785ce66665" 00:23:17.499 ], 00:23:17.499 "product_name": "Malloc disk", 00:23:17.499 "block_size": 512, 00:23:17.499 "num_blocks": 65536, 00:23:17.499 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:17.499 "assigned_rate_limits": { 00:23:17.499 "rw_ios_per_sec": 0, 00:23:17.499 "rw_mbytes_per_sec": 0, 00:23:17.499 "r_mbytes_per_sec": 0, 00:23:17.499 "w_mbytes_per_sec": 0 00:23:17.499 }, 00:23:17.499 "claimed": true, 00:23:17.499 "claim_type": "exclusive_write", 00:23:17.499 "zoned": false, 00:23:17.499 "supported_io_types": { 00:23:17.499 "read": true, 00:23:17.499 "write": true, 00:23:17.499 "unmap": true, 00:23:17.499 "flush": true, 00:23:17.499 "reset": true, 00:23:17.499 "nvme_admin": false, 00:23:17.499 "nvme_io": false, 00:23:17.499 "nvme_io_md": false, 00:23:17.499 "write_zeroes": true, 00:23:17.499 "zcopy": true, 00:23:17.499 "get_zone_info": false, 00:23:17.499 "zone_management": false, 00:23:17.499 "zone_append": false, 00:23:17.499 "compare": false, 00:23:17.499 "compare_and_write": false, 00:23:17.499 "abort": true, 00:23:17.499 "seek_hole": false, 00:23:17.499 "seek_data": false, 00:23:17.499 "copy": true, 00:23:17.499 "nvme_iov_md": false 00:23:17.499 }, 00:23:17.499 "memory_domains": [ 00:23:17.499 { 00:23:17.499 "dma_device_id": "system", 00:23:17.499 "dma_device_type": 1 00:23:17.499 }, 00:23:17.499 { 00:23:17.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.499 "dma_device_type": 2 00:23:17.499 } 00:23:17.499 ], 00:23:17.499 "driver_specific": {} 00:23:17.499 } 00:23:17.499 ] 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.499 "name": "Existed_Raid", 00:23:17.499 "uuid": 
"d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:17.499 "strip_size_kb": 64, 00:23:17.499 "state": "configuring", 00:23:17.499 "raid_level": "raid5f", 00:23:17.499 "superblock": true, 00:23:17.499 "num_base_bdevs": 4, 00:23:17.499 "num_base_bdevs_discovered": 2, 00:23:17.499 "num_base_bdevs_operational": 4, 00:23:17.499 "base_bdevs_list": [ 00:23:17.499 { 00:23:17.499 "name": "BaseBdev1", 00:23:17.499 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:17.499 "is_configured": true, 00:23:17.499 "data_offset": 2048, 00:23:17.499 "data_size": 63488 00:23:17.499 }, 00:23:17.499 { 00:23:17.499 "name": "BaseBdev2", 00:23:17.499 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:17.499 "is_configured": true, 00:23:17.499 "data_offset": 2048, 00:23:17.499 "data_size": 63488 00:23:17.499 }, 00:23:17.499 { 00:23:17.499 "name": "BaseBdev3", 00:23:17.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.499 "is_configured": false, 00:23:17.499 "data_offset": 0, 00:23:17.499 "data_size": 0 00:23:17.499 }, 00:23:17.499 { 00:23:17.499 "name": "BaseBdev4", 00:23:17.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.499 "is_configured": false, 00:23:17.499 "data_offset": 0, 00:23:17.499 "data_size": 0 00:23:17.499 } 00:23:17.499 ] 00:23:17.499 }' 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.499 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 [2024-11-06 09:16:16.930111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:18.066 BaseBdev3 
00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 [ 00:23:18.066 { 00:23:18.066 "name": "BaseBdev3", 00:23:18.066 "aliases": [ 00:23:18.066 "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e" 00:23:18.066 ], 00:23:18.066 "product_name": "Malloc disk", 00:23:18.066 "block_size": 512, 00:23:18.066 "num_blocks": 65536, 00:23:18.066 "uuid": "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e", 00:23:18.066 
"assigned_rate_limits": { 00:23:18.066 "rw_ios_per_sec": 0, 00:23:18.066 "rw_mbytes_per_sec": 0, 00:23:18.066 "r_mbytes_per_sec": 0, 00:23:18.066 "w_mbytes_per_sec": 0 00:23:18.066 }, 00:23:18.066 "claimed": true, 00:23:18.066 "claim_type": "exclusive_write", 00:23:18.066 "zoned": false, 00:23:18.066 "supported_io_types": { 00:23:18.066 "read": true, 00:23:18.066 "write": true, 00:23:18.066 "unmap": true, 00:23:18.066 "flush": true, 00:23:18.066 "reset": true, 00:23:18.066 "nvme_admin": false, 00:23:18.066 "nvme_io": false, 00:23:18.066 "nvme_io_md": false, 00:23:18.066 "write_zeroes": true, 00:23:18.066 "zcopy": true, 00:23:18.066 "get_zone_info": false, 00:23:18.066 "zone_management": false, 00:23:18.066 "zone_append": false, 00:23:18.066 "compare": false, 00:23:18.066 "compare_and_write": false, 00:23:18.066 "abort": true, 00:23:18.066 "seek_hole": false, 00:23:18.066 "seek_data": false, 00:23:18.066 "copy": true, 00:23:18.066 "nvme_iov_md": false 00:23:18.066 }, 00:23:18.066 "memory_domains": [ 00:23:18.066 { 00:23:18.066 "dma_device_id": "system", 00:23:18.066 "dma_device_type": 1 00:23:18.066 }, 00:23:18.066 { 00:23:18.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.066 "dma_device_type": 2 00:23:18.066 } 00:23:18.066 ], 00:23:18.066 "driver_specific": {} 00:23:18.066 } 00:23:18.066 ] 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 09:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.066 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.066 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.066 "name": "Existed_Raid", 00:23:18.066 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:18.066 "strip_size_kb": 64, 00:23:18.066 "state": "configuring", 00:23:18.066 "raid_level": "raid5f", 00:23:18.066 "superblock": true, 00:23:18.066 "num_base_bdevs": 4, 00:23:18.066 "num_base_bdevs_discovered": 3, 
00:23:18.066 "num_base_bdevs_operational": 4, 00:23:18.066 "base_bdevs_list": [ 00:23:18.066 { 00:23:18.066 "name": "BaseBdev1", 00:23:18.066 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:18.066 "is_configured": true, 00:23:18.066 "data_offset": 2048, 00:23:18.066 "data_size": 63488 00:23:18.066 }, 00:23:18.066 { 00:23:18.066 "name": "BaseBdev2", 00:23:18.066 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:18.066 "is_configured": true, 00:23:18.066 "data_offset": 2048, 00:23:18.066 "data_size": 63488 00:23:18.066 }, 00:23:18.066 { 00:23:18.066 "name": "BaseBdev3", 00:23:18.066 "uuid": "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e", 00:23:18.066 "is_configured": true, 00:23:18.066 "data_offset": 2048, 00:23:18.066 "data_size": 63488 00:23:18.066 }, 00:23:18.066 { 00:23:18.066 "name": "BaseBdev4", 00:23:18.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.066 "is_configured": false, 00:23:18.066 "data_offset": 0, 00:23:18.066 "data_size": 0 00:23:18.066 } 00:23:18.066 ] 00:23:18.066 }' 00:23:18.066 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.066 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.633 [2024-11-06 09:16:17.495562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:18.633 [2024-11-06 09:16:17.495898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:18.633 [2024-11-06 09:16:17.495915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:18.633 [2024-11-06 
09:16:17.496225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:18.633 BaseBdev4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.633 [2024-11-06 09:16:17.504607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:18.633 [2024-11-06 09:16:17.504641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:18.633 [2024-11-06 09:16:17.504966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:18.633 09:16:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.633 [ 00:23:18.633 { 00:23:18.633 "name": "BaseBdev4", 00:23:18.633 "aliases": [ 00:23:18.633 "862c72ad-a179-4cc5-8a28-edacb6b8ab21" 00:23:18.633 ], 00:23:18.633 "product_name": "Malloc disk", 00:23:18.633 "block_size": 512, 00:23:18.633 "num_blocks": 65536, 00:23:18.633 "uuid": "862c72ad-a179-4cc5-8a28-edacb6b8ab21", 00:23:18.633 "assigned_rate_limits": { 00:23:18.633 "rw_ios_per_sec": 0, 00:23:18.633 "rw_mbytes_per_sec": 0, 00:23:18.633 "r_mbytes_per_sec": 0, 00:23:18.633 "w_mbytes_per_sec": 0 00:23:18.633 }, 00:23:18.633 "claimed": true, 00:23:18.633 "claim_type": "exclusive_write", 00:23:18.633 "zoned": false, 00:23:18.633 "supported_io_types": { 00:23:18.633 "read": true, 00:23:18.633 "write": true, 00:23:18.633 "unmap": true, 00:23:18.633 "flush": true, 00:23:18.633 "reset": true, 00:23:18.633 "nvme_admin": false, 00:23:18.633 "nvme_io": false, 00:23:18.633 "nvme_io_md": false, 00:23:18.633 "write_zeroes": true, 00:23:18.633 "zcopy": true, 00:23:18.633 "get_zone_info": false, 00:23:18.633 "zone_management": false, 00:23:18.633 "zone_append": false, 00:23:18.633 "compare": false, 00:23:18.633 "compare_and_write": false, 00:23:18.633 "abort": true, 00:23:18.633 "seek_hole": false, 00:23:18.633 "seek_data": false, 00:23:18.633 "copy": true, 00:23:18.633 "nvme_iov_md": false 00:23:18.633 }, 00:23:18.633 "memory_domains": [ 00:23:18.633 { 00:23:18.633 "dma_device_id": "system", 00:23:18.633 "dma_device_type": 1 00:23:18.633 }, 00:23:18.633 { 00:23:18.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.633 "dma_device_type": 2 00:23:18.633 } 00:23:18.633 ], 00:23:18.633 "driver_specific": {} 00:23:18.633 } 00:23:18.633 ] 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.633 09:16:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.633 "name": "Existed_Raid", 00:23:18.633 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:18.633 "strip_size_kb": 64, 00:23:18.633 "state": "online", 00:23:18.633 "raid_level": "raid5f", 00:23:18.633 "superblock": true, 00:23:18.633 "num_base_bdevs": 4, 00:23:18.633 "num_base_bdevs_discovered": 4, 00:23:18.633 "num_base_bdevs_operational": 4, 00:23:18.633 "base_bdevs_list": [ 00:23:18.633 { 00:23:18.633 "name": "BaseBdev1", 00:23:18.633 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:18.633 "is_configured": true, 00:23:18.633 "data_offset": 2048, 00:23:18.633 "data_size": 63488 00:23:18.633 }, 00:23:18.633 { 00:23:18.633 "name": "BaseBdev2", 00:23:18.633 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:18.633 "is_configured": true, 00:23:18.633 "data_offset": 2048, 00:23:18.633 "data_size": 63488 00:23:18.633 }, 00:23:18.633 { 00:23:18.633 "name": "BaseBdev3", 00:23:18.633 "uuid": "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e", 00:23:18.633 "is_configured": true, 00:23:18.633 "data_offset": 2048, 00:23:18.633 "data_size": 63488 00:23:18.633 }, 00:23:18.633 { 00:23:18.633 "name": "BaseBdev4", 00:23:18.633 "uuid": "862c72ad-a179-4cc5-8a28-edacb6b8ab21", 00:23:18.633 "is_configured": true, 00:23:18.633 "data_offset": 2048, 00:23:18.633 "data_size": 63488 00:23:18.633 } 00:23:18.633 ] 00:23:18.633 }' 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.633 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.203 09:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.203 [2024-11-06 09:16:18.005654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.203 "name": "Existed_Raid", 00:23:19.203 "aliases": [ 00:23:19.203 "d3897a6d-4d14-463b-a0d0-d48dbbd53e12" 00:23:19.203 ], 00:23:19.203 "product_name": "Raid Volume", 00:23:19.203 "block_size": 512, 00:23:19.203 "num_blocks": 190464, 00:23:19.203 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:19.203 "assigned_rate_limits": { 00:23:19.203 "rw_ios_per_sec": 0, 00:23:19.203 "rw_mbytes_per_sec": 0, 00:23:19.203 "r_mbytes_per_sec": 0, 00:23:19.203 "w_mbytes_per_sec": 0 00:23:19.203 }, 00:23:19.203 "claimed": false, 00:23:19.203 "zoned": false, 00:23:19.203 "supported_io_types": { 00:23:19.203 "read": true, 00:23:19.203 "write": true, 00:23:19.203 "unmap": false, 00:23:19.203 "flush": false, 
00:23:19.203 "reset": true, 00:23:19.203 "nvme_admin": false, 00:23:19.203 "nvme_io": false, 00:23:19.203 "nvme_io_md": false, 00:23:19.203 "write_zeroes": true, 00:23:19.203 "zcopy": false, 00:23:19.203 "get_zone_info": false, 00:23:19.203 "zone_management": false, 00:23:19.203 "zone_append": false, 00:23:19.203 "compare": false, 00:23:19.203 "compare_and_write": false, 00:23:19.203 "abort": false, 00:23:19.203 "seek_hole": false, 00:23:19.203 "seek_data": false, 00:23:19.203 "copy": false, 00:23:19.203 "nvme_iov_md": false 00:23:19.203 }, 00:23:19.203 "driver_specific": { 00:23:19.203 "raid": { 00:23:19.203 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:19.203 "strip_size_kb": 64, 00:23:19.203 "state": "online", 00:23:19.203 "raid_level": "raid5f", 00:23:19.203 "superblock": true, 00:23:19.203 "num_base_bdevs": 4, 00:23:19.203 "num_base_bdevs_discovered": 4, 00:23:19.203 "num_base_bdevs_operational": 4, 00:23:19.203 "base_bdevs_list": [ 00:23:19.203 { 00:23:19.203 "name": "BaseBdev1", 00:23:19.203 "uuid": "41699241-8a2a-48fc-9537-3a2406f05778", 00:23:19.203 "is_configured": true, 00:23:19.203 "data_offset": 2048, 00:23:19.203 "data_size": 63488 00:23:19.203 }, 00:23:19.203 { 00:23:19.203 "name": "BaseBdev2", 00:23:19.203 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:19.203 "is_configured": true, 00:23:19.203 "data_offset": 2048, 00:23:19.203 "data_size": 63488 00:23:19.203 }, 00:23:19.203 { 00:23:19.203 "name": "BaseBdev3", 00:23:19.203 "uuid": "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e", 00:23:19.203 "is_configured": true, 00:23:19.203 "data_offset": 2048, 00:23:19.203 "data_size": 63488 00:23:19.203 }, 00:23:19.203 { 00:23:19.203 "name": "BaseBdev4", 00:23:19.203 "uuid": "862c72ad-a179-4cc5-8a28-edacb6b8ab21", 00:23:19.203 "is_configured": true, 00:23:19.203 "data_offset": 2048, 00:23:19.203 "data_size": 63488 00:23:19.203 } 00:23:19.203 ] 00:23:19.203 } 00:23:19.203 } 00:23:19.203 }' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:19.203 BaseBdev2 00:23:19.203 BaseBdev3 00:23:19.203 BaseBdev4' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.203 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:19.467 09:16:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 [2024-11-06 09:16:18.348914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.725 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.725 "name": "Existed_Raid", 00:23:19.725 "uuid": "d3897a6d-4d14-463b-a0d0-d48dbbd53e12", 00:23:19.725 "strip_size_kb": 64, 00:23:19.725 "state": "online", 00:23:19.725 "raid_level": "raid5f", 00:23:19.725 "superblock": true, 00:23:19.725 "num_base_bdevs": 4, 00:23:19.725 "num_base_bdevs_discovered": 3, 00:23:19.725 "num_base_bdevs_operational": 3, 00:23:19.725 "base_bdevs_list": [ 00:23:19.725 { 00:23:19.725 "name": 
null, 00:23:19.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.725 "is_configured": false, 00:23:19.725 "data_offset": 0, 00:23:19.725 "data_size": 63488 00:23:19.725 }, 00:23:19.725 { 00:23:19.725 "name": "BaseBdev2", 00:23:19.725 "uuid": "26fdc907-d476-4f72-9948-63785ce66665", 00:23:19.725 "is_configured": true, 00:23:19.725 "data_offset": 2048, 00:23:19.725 "data_size": 63488 00:23:19.725 }, 00:23:19.725 { 00:23:19.725 "name": "BaseBdev3", 00:23:19.725 "uuid": "bc57efb1-9ff5-47ab-99a2-1570ad4a4a5e", 00:23:19.725 "is_configured": true, 00:23:19.725 "data_offset": 2048, 00:23:19.725 "data_size": 63488 00:23:19.725 }, 00:23:19.725 { 00:23:19.725 "name": "BaseBdev4", 00:23:19.725 "uuid": "862c72ad-a179-4cc5-8a28-edacb6b8ab21", 00:23:19.725 "is_configured": true, 00:23:19.725 "data_offset": 2048, 00:23:19.725 "data_size": 63488 00:23:19.725 } 00:23:19.725 ] 00:23:19.725 }' 00:23:19.725 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.725 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.983 09:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.983 [2024-11-06 09:16:18.940804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:19.984 [2024-11-06 09:16:18.940978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.243 [2024-11-06 09:16:19.045589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.243 [2024-11-06 09:16:19.105554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.243 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.243 [2024-11-06 
09:16:19.266309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:20.243 [2024-11-06 09:16:19.266367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.502 09:16:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.502 BaseBdev2 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.502 [ 00:23:20.502 { 00:23:20.502 "name": "BaseBdev2", 00:23:20.502 "aliases": [ 00:23:20.502 "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe" 00:23:20.502 ], 00:23:20.502 "product_name": "Malloc disk", 00:23:20.502 "block_size": 512, 00:23:20.502 
"num_blocks": 65536, 00:23:20.502 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:20.502 "assigned_rate_limits": { 00:23:20.502 "rw_ios_per_sec": 0, 00:23:20.502 "rw_mbytes_per_sec": 0, 00:23:20.502 "r_mbytes_per_sec": 0, 00:23:20.502 "w_mbytes_per_sec": 0 00:23:20.502 }, 00:23:20.502 "claimed": false, 00:23:20.502 "zoned": false, 00:23:20.502 "supported_io_types": { 00:23:20.502 "read": true, 00:23:20.502 "write": true, 00:23:20.502 "unmap": true, 00:23:20.502 "flush": true, 00:23:20.502 "reset": true, 00:23:20.502 "nvme_admin": false, 00:23:20.502 "nvme_io": false, 00:23:20.502 "nvme_io_md": false, 00:23:20.502 "write_zeroes": true, 00:23:20.502 "zcopy": true, 00:23:20.502 "get_zone_info": false, 00:23:20.502 "zone_management": false, 00:23:20.502 "zone_append": false, 00:23:20.502 "compare": false, 00:23:20.502 "compare_and_write": false, 00:23:20.502 "abort": true, 00:23:20.502 "seek_hole": false, 00:23:20.502 "seek_data": false, 00:23:20.502 "copy": true, 00:23:20.502 "nvme_iov_md": false 00:23:20.502 }, 00:23:20.502 "memory_domains": [ 00:23:20.502 { 00:23:20.502 "dma_device_id": "system", 00:23:20.502 "dma_device_type": 1 00:23:20.502 }, 00:23:20.502 { 00:23:20.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.502 "dma_device_type": 2 00:23:20.502 } 00:23:20.502 ], 00:23:20.502 "driver_specific": {} 00:23:20.502 } 00:23:20.502 ] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:20.502 09:16:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.502 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.764 BaseBdev3 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.764 [ 00:23:20.764 { 00:23:20.764 "name": "BaseBdev3", 00:23:20.764 "aliases": [ 00:23:20.764 
"49a88fa3-b4c6-4d6d-a837-392a2b01b76f" 00:23:20.764 ], 00:23:20.764 "product_name": "Malloc disk", 00:23:20.764 "block_size": 512, 00:23:20.764 "num_blocks": 65536, 00:23:20.764 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:20.764 "assigned_rate_limits": { 00:23:20.764 "rw_ios_per_sec": 0, 00:23:20.764 "rw_mbytes_per_sec": 0, 00:23:20.764 "r_mbytes_per_sec": 0, 00:23:20.764 "w_mbytes_per_sec": 0 00:23:20.764 }, 00:23:20.764 "claimed": false, 00:23:20.764 "zoned": false, 00:23:20.764 "supported_io_types": { 00:23:20.764 "read": true, 00:23:20.764 "write": true, 00:23:20.764 "unmap": true, 00:23:20.764 "flush": true, 00:23:20.764 "reset": true, 00:23:20.764 "nvme_admin": false, 00:23:20.764 "nvme_io": false, 00:23:20.764 "nvme_io_md": false, 00:23:20.764 "write_zeroes": true, 00:23:20.764 "zcopy": true, 00:23:20.764 "get_zone_info": false, 00:23:20.764 "zone_management": false, 00:23:20.764 "zone_append": false, 00:23:20.764 "compare": false, 00:23:20.764 "compare_and_write": false, 00:23:20.764 "abort": true, 00:23:20.764 "seek_hole": false, 00:23:20.764 "seek_data": false, 00:23:20.764 "copy": true, 00:23:20.764 "nvme_iov_md": false 00:23:20.764 }, 00:23:20.764 "memory_domains": [ 00:23:20.764 { 00:23:20.764 "dma_device_id": "system", 00:23:20.764 "dma_device_type": 1 00:23:20.764 }, 00:23:20.764 { 00:23:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.764 "dma_device_type": 2 00:23:20.764 } 00:23:20.764 ], 00:23:20.764 "driver_specific": {} 00:23:20.764 } 00:23:20.764 ] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:20.764 09:16:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.764 BaseBdev4 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.764 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:20.764 [ 00:23:20.764 { 00:23:20.764 "name": "BaseBdev4", 00:23:20.764 "aliases": [ 00:23:20.764 "3a19d192-302b-43cc-90c8-911bf1650c4e" 00:23:20.764 ], 00:23:20.764 "product_name": "Malloc disk", 00:23:20.764 "block_size": 512, 00:23:20.764 "num_blocks": 65536, 00:23:20.764 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:20.764 "assigned_rate_limits": { 00:23:20.764 "rw_ios_per_sec": 0, 00:23:20.764 "rw_mbytes_per_sec": 0, 00:23:20.764 "r_mbytes_per_sec": 0, 00:23:20.764 "w_mbytes_per_sec": 0 00:23:20.764 }, 00:23:20.764 "claimed": false, 00:23:20.764 "zoned": false, 00:23:20.765 "supported_io_types": { 00:23:20.765 "read": true, 00:23:20.765 "write": true, 00:23:20.765 "unmap": true, 00:23:20.765 "flush": true, 00:23:20.765 "reset": true, 00:23:20.765 "nvme_admin": false, 00:23:20.765 "nvme_io": false, 00:23:20.765 "nvme_io_md": false, 00:23:20.765 "write_zeroes": true, 00:23:20.765 "zcopy": true, 00:23:20.765 "get_zone_info": false, 00:23:20.765 "zone_management": false, 00:23:20.765 "zone_append": false, 00:23:20.765 "compare": false, 00:23:20.765 "compare_and_write": false, 00:23:20.765 "abort": true, 00:23:20.765 "seek_hole": false, 00:23:20.765 "seek_data": false, 00:23:20.765 "copy": true, 00:23:20.765 "nvme_iov_md": false 00:23:20.765 }, 00:23:20.765 "memory_domains": [ 00:23:20.765 { 00:23:20.765 "dma_device_id": "system", 00:23:20.765 "dma_device_type": 1 00:23:20.765 }, 00:23:20.765 { 00:23:20.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.765 "dma_device_type": 2 00:23:20.765 } 00:23:20.765 ], 00:23:20.765 "driver_specific": {} 00:23:20.765 } 00:23:20.765 ] 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:20.765 09:16:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.765 [2024-11-06 09:16:19.696431] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.765 [2024-11-06 09:16:19.696481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.765 [2024-11-06 09:16:19.696508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.765 [2024-11-06 09:16:19.698725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.765 [2024-11-06 09:16:19.698781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.765 "name": "Existed_Raid", 00:23:20.765 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:20.765 "strip_size_kb": 64, 00:23:20.765 "state": "configuring", 00:23:20.765 "raid_level": "raid5f", 00:23:20.765 "superblock": true, 00:23:20.765 "num_base_bdevs": 4, 00:23:20.765 "num_base_bdevs_discovered": 3, 00:23:20.765 "num_base_bdevs_operational": 4, 00:23:20.765 "base_bdevs_list": [ 00:23:20.765 { 00:23:20.765 "name": "BaseBdev1", 00:23:20.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.765 "is_configured": false, 00:23:20.765 "data_offset": 0, 00:23:20.765 "data_size": 0 00:23:20.765 }, 00:23:20.765 { 00:23:20.765 "name": "BaseBdev2", 00:23:20.765 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:20.765 "is_configured": true, 00:23:20.765 "data_offset": 2048, 00:23:20.765 
"data_size": 63488 00:23:20.765 }, 00:23:20.765 { 00:23:20.765 "name": "BaseBdev3", 00:23:20.765 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:20.765 "is_configured": true, 00:23:20.765 "data_offset": 2048, 00:23:20.765 "data_size": 63488 00:23:20.765 }, 00:23:20.765 { 00:23:20.765 "name": "BaseBdev4", 00:23:20.765 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:20.765 "is_configured": true, 00:23:20.765 "data_offset": 2048, 00:23:20.765 "data_size": 63488 00:23:20.765 } 00:23:20.765 ] 00:23:20.765 }' 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.765 09:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.338 [2024-11-06 09:16:20.107926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.338 09:16:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.338 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.338 "name": "Existed_Raid", 00:23:21.338 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:21.338 "strip_size_kb": 64, 00:23:21.338 "state": "configuring", 00:23:21.338 "raid_level": "raid5f", 00:23:21.338 "superblock": true, 00:23:21.338 "num_base_bdevs": 4, 00:23:21.338 "num_base_bdevs_discovered": 2, 00:23:21.338 "num_base_bdevs_operational": 4, 00:23:21.338 "base_bdevs_list": [ 00:23:21.338 { 00:23:21.338 "name": "BaseBdev1", 00:23:21.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.338 "is_configured": false, 00:23:21.338 "data_offset": 0, 00:23:21.338 "data_size": 0 00:23:21.338 }, 00:23:21.338 { 00:23:21.338 "name": null, 00:23:21.338 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:21.338 
"is_configured": false, 00:23:21.338 "data_offset": 0, 00:23:21.338 "data_size": 63488 00:23:21.338 }, 00:23:21.338 { 00:23:21.338 "name": "BaseBdev3", 00:23:21.338 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:21.338 "is_configured": true, 00:23:21.338 "data_offset": 2048, 00:23:21.338 "data_size": 63488 00:23:21.338 }, 00:23:21.338 { 00:23:21.338 "name": "BaseBdev4", 00:23:21.338 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:21.338 "is_configured": true, 00:23:21.339 "data_offset": 2048, 00:23:21.339 "data_size": 63488 00:23:21.339 } 00:23:21.339 ] 00:23:21.339 }' 00:23:21.339 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.339 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.597 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 [2024-11-06 09:16:20.653091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:23:21.856 BaseBdev1 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.856 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.856 [ 00:23:21.856 { 00:23:21.856 "name": "BaseBdev1", 00:23:21.856 "aliases": [ 00:23:21.856 "13c7a75e-b31b-4303-b55f-83baa6e4499d" 00:23:21.856 ], 00:23:21.857 "product_name": "Malloc disk", 00:23:21.857 "block_size": 512, 00:23:21.857 "num_blocks": 65536, 00:23:21.857 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 
00:23:21.857 "assigned_rate_limits": { 00:23:21.857 "rw_ios_per_sec": 0, 00:23:21.857 "rw_mbytes_per_sec": 0, 00:23:21.857 "r_mbytes_per_sec": 0, 00:23:21.857 "w_mbytes_per_sec": 0 00:23:21.857 }, 00:23:21.857 "claimed": true, 00:23:21.857 "claim_type": "exclusive_write", 00:23:21.857 "zoned": false, 00:23:21.857 "supported_io_types": { 00:23:21.857 "read": true, 00:23:21.857 "write": true, 00:23:21.857 "unmap": true, 00:23:21.857 "flush": true, 00:23:21.857 "reset": true, 00:23:21.857 "nvme_admin": false, 00:23:21.857 "nvme_io": false, 00:23:21.857 "nvme_io_md": false, 00:23:21.857 "write_zeroes": true, 00:23:21.857 "zcopy": true, 00:23:21.857 "get_zone_info": false, 00:23:21.857 "zone_management": false, 00:23:21.857 "zone_append": false, 00:23:21.857 "compare": false, 00:23:21.857 "compare_and_write": false, 00:23:21.857 "abort": true, 00:23:21.857 "seek_hole": false, 00:23:21.857 "seek_data": false, 00:23:21.857 "copy": true, 00:23:21.857 "nvme_iov_md": false 00:23:21.857 }, 00:23:21.857 "memory_domains": [ 00:23:21.857 { 00:23:21.857 "dma_device_id": "system", 00:23:21.857 "dma_device_type": 1 00:23:21.857 }, 00:23:21.857 { 00:23:21.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.857 "dma_device_type": 2 00:23:21.857 } 00:23:21.857 ], 00:23:21.857 "driver_specific": {} 00:23:21.857 } 00:23:21.857 ] 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.857 09:16:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.857 "name": "Existed_Raid", 00:23:21.857 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:21.857 "strip_size_kb": 64, 00:23:21.857 "state": "configuring", 00:23:21.857 "raid_level": "raid5f", 00:23:21.857 "superblock": true, 00:23:21.857 "num_base_bdevs": 4, 00:23:21.857 "num_base_bdevs_discovered": 3, 00:23:21.857 "num_base_bdevs_operational": 4, 00:23:21.857 "base_bdevs_list": [ 00:23:21.857 { 00:23:21.857 "name": "BaseBdev1", 00:23:21.857 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 
00:23:21.857 "is_configured": true, 00:23:21.857 "data_offset": 2048, 00:23:21.857 "data_size": 63488 00:23:21.857 }, 00:23:21.857 { 00:23:21.857 "name": null, 00:23:21.857 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:21.857 "is_configured": false, 00:23:21.857 "data_offset": 0, 00:23:21.857 "data_size": 63488 00:23:21.857 }, 00:23:21.857 { 00:23:21.857 "name": "BaseBdev3", 00:23:21.857 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:21.857 "is_configured": true, 00:23:21.857 "data_offset": 2048, 00:23:21.857 "data_size": 63488 00:23:21.857 }, 00:23:21.857 { 00:23:21.857 "name": "BaseBdev4", 00:23:21.857 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:21.857 "is_configured": true, 00:23:21.857 "data_offset": 2048, 00:23:21.857 "data_size": 63488 00:23:21.857 } 00:23:21.857 ] 00:23:21.857 }' 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.857 09:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.116 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.116 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:22.116 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.116 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.116 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.375 [2024-11-06 09:16:21.168509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.375 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.375 "name": "Existed_Raid", 00:23:22.375 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:22.376 "strip_size_kb": 64, 00:23:22.376 "state": "configuring", 00:23:22.376 "raid_level": "raid5f", 00:23:22.376 "superblock": true, 00:23:22.376 "num_base_bdevs": 4, 00:23:22.376 "num_base_bdevs_discovered": 2, 00:23:22.376 "num_base_bdevs_operational": 4, 00:23:22.376 "base_bdevs_list": [ 00:23:22.376 { 00:23:22.376 "name": "BaseBdev1", 00:23:22.376 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:22.376 "is_configured": true, 00:23:22.376 "data_offset": 2048, 00:23:22.376 "data_size": 63488 00:23:22.376 }, 00:23:22.376 { 00:23:22.376 "name": null, 00:23:22.376 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:22.376 "is_configured": false, 00:23:22.376 "data_offset": 0, 00:23:22.376 "data_size": 63488 00:23:22.376 }, 00:23:22.376 { 00:23:22.376 "name": null, 00:23:22.376 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:22.376 "is_configured": false, 00:23:22.376 "data_offset": 0, 00:23:22.376 "data_size": 63488 00:23:22.376 }, 00:23:22.376 { 00:23:22.376 "name": "BaseBdev4", 00:23:22.376 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:22.376 "is_configured": true, 00:23:22.376 "data_offset": 2048, 00:23:22.376 "data_size": 63488 00:23:22.376 } 00:23:22.376 ] 00:23:22.376 }' 00:23:22.376 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.376 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.636 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.894 [2024-11-06 09:16:21.675851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.894 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.895 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.895 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.895 "name": "Existed_Raid", 00:23:22.895 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:22.895 "strip_size_kb": 64, 00:23:22.895 "state": "configuring", 00:23:22.895 "raid_level": "raid5f", 00:23:22.895 "superblock": true, 00:23:22.895 "num_base_bdevs": 4, 00:23:22.895 "num_base_bdevs_discovered": 3, 00:23:22.895 "num_base_bdevs_operational": 4, 00:23:22.895 "base_bdevs_list": [ 00:23:22.895 { 00:23:22.895 "name": "BaseBdev1", 00:23:22.895 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:22.895 "is_configured": true, 00:23:22.895 "data_offset": 2048, 00:23:22.895 "data_size": 63488 00:23:22.895 }, 00:23:22.895 { 00:23:22.895 "name": null, 00:23:22.895 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:22.895 "is_configured": false, 00:23:22.895 "data_offset": 0, 00:23:22.895 "data_size": 63488 00:23:22.895 }, 00:23:22.895 { 00:23:22.895 "name": "BaseBdev3", 00:23:22.895 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 
00:23:22.895 "is_configured": true, 00:23:22.895 "data_offset": 2048, 00:23:22.895 "data_size": 63488 00:23:22.895 }, 00:23:22.895 { 00:23:22.895 "name": "BaseBdev4", 00:23:22.895 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:22.895 "is_configured": true, 00:23:22.895 "data_offset": 2048, 00:23:22.895 "data_size": 63488 00:23:22.895 } 00:23:22.895 ] 00:23:22.895 }' 00:23:22.895 09:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.895 09:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.155 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.155 [2024-11-06 09:16:22.151228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.414 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.414 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:23.414 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.414 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.415 "name": "Existed_Raid", 00:23:23.415 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:23.415 "strip_size_kb": 64, 00:23:23.415 "state": "configuring", 00:23:23.415 "raid_level": "raid5f", 
00:23:23.415 "superblock": true, 00:23:23.415 "num_base_bdevs": 4, 00:23:23.415 "num_base_bdevs_discovered": 2, 00:23:23.415 "num_base_bdevs_operational": 4, 00:23:23.415 "base_bdevs_list": [ 00:23:23.415 { 00:23:23.415 "name": null, 00:23:23.415 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:23.415 "is_configured": false, 00:23:23.415 "data_offset": 0, 00:23:23.415 "data_size": 63488 00:23:23.415 }, 00:23:23.415 { 00:23:23.415 "name": null, 00:23:23.415 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:23.415 "is_configured": false, 00:23:23.415 "data_offset": 0, 00:23:23.415 "data_size": 63488 00:23:23.415 }, 00:23:23.415 { 00:23:23.415 "name": "BaseBdev3", 00:23:23.415 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:23.415 "is_configured": true, 00:23:23.415 "data_offset": 2048, 00:23:23.415 "data_size": 63488 00:23:23.415 }, 00:23:23.415 { 00:23:23.415 "name": "BaseBdev4", 00:23:23.415 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:23.415 "is_configured": true, 00:23:23.415 "data_offset": 2048, 00:23:23.415 "data_size": 63488 00:23:23.415 } 00:23:23.415 ] 00:23:23.415 }' 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.415 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.673 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:23.673 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.673 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.673 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.932 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.933 [2024-11-06 09:16:22.753033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.933 "name": "Existed_Raid", 00:23:23.933 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:23.933 "strip_size_kb": 64, 00:23:23.933 "state": "configuring", 00:23:23.933 "raid_level": "raid5f", 00:23:23.933 "superblock": true, 00:23:23.933 "num_base_bdevs": 4, 00:23:23.933 "num_base_bdevs_discovered": 3, 00:23:23.933 "num_base_bdevs_operational": 4, 00:23:23.933 "base_bdevs_list": [ 00:23:23.933 { 00:23:23.933 "name": null, 00:23:23.933 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:23.933 "is_configured": false, 00:23:23.933 "data_offset": 0, 00:23:23.933 "data_size": 63488 00:23:23.933 }, 00:23:23.933 { 00:23:23.933 "name": "BaseBdev2", 00:23:23.933 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:23.933 "is_configured": true, 00:23:23.933 "data_offset": 2048, 00:23:23.933 "data_size": 63488 00:23:23.933 }, 00:23:23.933 { 00:23:23.933 "name": "BaseBdev3", 00:23:23.933 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:23.933 "is_configured": true, 00:23:23.933 "data_offset": 2048, 00:23:23.933 "data_size": 63488 00:23:23.933 }, 00:23:23.933 { 00:23:23.933 "name": "BaseBdev4", 00:23:23.933 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:23.933 "is_configured": true, 00:23:23.933 "data_offset": 2048, 00:23:23.933 "data_size": 63488 00:23:23.933 } 00:23:23.933 ] 00:23:23.933 }' 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:23:23.933 09:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.192 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.192 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.192 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.192 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:24.450 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.450 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13c7a75e-b31b-4303-b55f-83baa6e4499d 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 [2024-11-06 09:16:23.353483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:24.451 [2024-11-06 09:16:23.353757] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:24.451 [2024-11-06 09:16:23.353774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:24.451 [2024-11-06 09:16:23.354059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:24.451 NewBaseBdev 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 [2024-11-06 09:16:23.361959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:24.451 [2024-11-06 09:16:23.362144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:24.451 [2024-11-06 09:16:23.362565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 [ 00:23:24.451 { 00:23:24.451 "name": "NewBaseBdev", 00:23:24.451 "aliases": [ 00:23:24.451 "13c7a75e-b31b-4303-b55f-83baa6e4499d" 00:23:24.451 ], 00:23:24.451 "product_name": "Malloc disk", 00:23:24.451 "block_size": 512, 00:23:24.451 "num_blocks": 65536, 00:23:24.451 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:24.451 "assigned_rate_limits": { 00:23:24.451 "rw_ios_per_sec": 0, 00:23:24.451 "rw_mbytes_per_sec": 0, 00:23:24.451 "r_mbytes_per_sec": 0, 00:23:24.451 "w_mbytes_per_sec": 0 00:23:24.451 }, 00:23:24.451 "claimed": true, 00:23:24.451 "claim_type": "exclusive_write", 00:23:24.451 "zoned": false, 00:23:24.451 "supported_io_types": { 00:23:24.451 "read": true, 00:23:24.451 "write": true, 00:23:24.451 "unmap": true, 00:23:24.451 "flush": true, 00:23:24.451 "reset": true, 00:23:24.451 "nvme_admin": false, 00:23:24.451 "nvme_io": false, 00:23:24.451 "nvme_io_md": false, 00:23:24.451 "write_zeroes": true, 00:23:24.451 "zcopy": true, 00:23:24.451 "get_zone_info": false, 00:23:24.451 "zone_management": false, 00:23:24.451 "zone_append": false, 00:23:24.451 "compare": false, 00:23:24.451 "compare_and_write": false, 00:23:24.451 "abort": true, 00:23:24.451 "seek_hole": false, 00:23:24.451 "seek_data": false, 00:23:24.451 "copy": true, 00:23:24.451 "nvme_iov_md": false 00:23:24.451 }, 00:23:24.451 "memory_domains": [ 00:23:24.451 { 00:23:24.451 "dma_device_id": "system", 00:23:24.451 "dma_device_type": 1 00:23:24.451 }, 00:23:24.451 { 00:23:24.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.451 "dma_device_type": 2 00:23:24.451 } 
00:23:24.451 ], 00:23:24.451 "driver_specific": {} 00:23:24.451 } 00:23:24.451 ] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.451 
09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.451 "name": "Existed_Raid", 00:23:24.451 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:24.451 "strip_size_kb": 64, 00:23:24.451 "state": "online", 00:23:24.451 "raid_level": "raid5f", 00:23:24.451 "superblock": true, 00:23:24.451 "num_base_bdevs": 4, 00:23:24.451 "num_base_bdevs_discovered": 4, 00:23:24.451 "num_base_bdevs_operational": 4, 00:23:24.451 "base_bdevs_list": [ 00:23:24.451 { 00:23:24.451 "name": "NewBaseBdev", 00:23:24.451 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:24.451 "is_configured": true, 00:23:24.451 "data_offset": 2048, 00:23:24.451 "data_size": 63488 00:23:24.451 }, 00:23:24.451 { 00:23:24.451 "name": "BaseBdev2", 00:23:24.451 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:24.451 "is_configured": true, 00:23:24.451 "data_offset": 2048, 00:23:24.451 "data_size": 63488 00:23:24.451 }, 00:23:24.451 { 00:23:24.451 "name": "BaseBdev3", 00:23:24.451 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:24.451 "is_configured": true, 00:23:24.451 "data_offset": 2048, 00:23:24.451 "data_size": 63488 00:23:24.451 }, 00:23:24.451 { 00:23:24.451 "name": "BaseBdev4", 00:23:24.451 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:24.451 "is_configured": true, 00:23:24.451 "data_offset": 2048, 00:23:24.451 "data_size": 63488 00:23:24.451 } 00:23:24.451 ] 00:23:24.451 }' 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.451 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:25.020 [2024-11-06 09:16:23.859614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.020 "name": "Existed_Raid", 00:23:25.020 "aliases": [ 00:23:25.020 "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53" 00:23:25.020 ], 00:23:25.020 "product_name": "Raid Volume", 00:23:25.020 "block_size": 512, 00:23:25.020 "num_blocks": 190464, 00:23:25.020 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:25.020 "assigned_rate_limits": { 00:23:25.020 "rw_ios_per_sec": 0, 00:23:25.020 "rw_mbytes_per_sec": 0, 00:23:25.020 "r_mbytes_per_sec": 0, 00:23:25.020 "w_mbytes_per_sec": 0 00:23:25.020 }, 00:23:25.020 "claimed": false, 00:23:25.020 "zoned": false, 00:23:25.020 "supported_io_types": { 00:23:25.020 "read": true, 00:23:25.020 "write": true, 00:23:25.020 "unmap": false, 00:23:25.020 "flush": false, 
00:23:25.020 "reset": true, 00:23:25.020 "nvme_admin": false, 00:23:25.020 "nvme_io": false, 00:23:25.020 "nvme_io_md": false, 00:23:25.020 "write_zeroes": true, 00:23:25.020 "zcopy": false, 00:23:25.020 "get_zone_info": false, 00:23:25.020 "zone_management": false, 00:23:25.020 "zone_append": false, 00:23:25.020 "compare": false, 00:23:25.020 "compare_and_write": false, 00:23:25.020 "abort": false, 00:23:25.020 "seek_hole": false, 00:23:25.020 "seek_data": false, 00:23:25.020 "copy": false, 00:23:25.020 "nvme_iov_md": false 00:23:25.020 }, 00:23:25.020 "driver_specific": { 00:23:25.020 "raid": { 00:23:25.020 "uuid": "d8d543df-8c56-4fc7-b7ae-5a1bc733cc53", 00:23:25.020 "strip_size_kb": 64, 00:23:25.020 "state": "online", 00:23:25.020 "raid_level": "raid5f", 00:23:25.020 "superblock": true, 00:23:25.020 "num_base_bdevs": 4, 00:23:25.020 "num_base_bdevs_discovered": 4, 00:23:25.020 "num_base_bdevs_operational": 4, 00:23:25.020 "base_bdevs_list": [ 00:23:25.020 { 00:23:25.020 "name": "NewBaseBdev", 00:23:25.020 "uuid": "13c7a75e-b31b-4303-b55f-83baa6e4499d", 00:23:25.020 "is_configured": true, 00:23:25.020 "data_offset": 2048, 00:23:25.020 "data_size": 63488 00:23:25.020 }, 00:23:25.020 { 00:23:25.020 "name": "BaseBdev2", 00:23:25.020 "uuid": "cd4b2ce3-ec56-45f0-b6de-175ba72f8bbe", 00:23:25.020 "is_configured": true, 00:23:25.020 "data_offset": 2048, 00:23:25.020 "data_size": 63488 00:23:25.020 }, 00:23:25.020 { 00:23:25.020 "name": "BaseBdev3", 00:23:25.020 "uuid": "49a88fa3-b4c6-4d6d-a837-392a2b01b76f", 00:23:25.020 "is_configured": true, 00:23:25.020 "data_offset": 2048, 00:23:25.020 "data_size": 63488 00:23:25.020 }, 00:23:25.020 { 00:23:25.020 "name": "BaseBdev4", 00:23:25.020 "uuid": "3a19d192-302b-43cc-90c8-911bf1650c4e", 00:23:25.020 "is_configured": true, 00:23:25.020 "data_offset": 2048, 00:23:25.020 "data_size": 63488 00:23:25.020 } 00:23:25.020 ] 00:23:25.020 } 00:23:25.020 } 00:23:25.020 }' 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:25.020 BaseBdev2 00:23:25.020 BaseBdev3 00:23:25.020 BaseBdev4' 00:23:25.020 09:16:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:25.020 
09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.020 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:25.279 09:16:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.279 [2024-11-06 09:16:24.154906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.279 [2024-11-06 09:16:24.154944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:25.279 [2024-11-06 09:16:24.155036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.279 [2024-11-06 09:16:24.155450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:25.279 [2024-11-06 09:16:24.155476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83148 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83148 ']' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 83148 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83148 00:23:25.279 killing process with pid 83148 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:25.279 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:25.280 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83148' 00:23:25.280 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83148 00:23:25.280 [2024-11-06 09:16:24.194157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.280 09:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83148 00:23:25.845 [2024-11-06 09:16:24.633717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:27.224 09:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:27.224 00:23:27.224 real 0m11.954s 00:23:27.224 user 0m18.897s 00:23:27.224 sys 0m2.464s 00:23:27.224 09:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.224 09:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 ************************************ 00:23:27.224 END TEST raid5f_state_function_test_sb 00:23:27.224 ************************************ 00:23:27.224 09:16:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:27.224 09:16:25 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:27.224 09:16:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.224 09:16:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 ************************************ 00:23:27.224 START TEST raid5f_superblock_test 00:23:27.224 ************************************ 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83822 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83822 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 83822 ']' 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.224 09:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 [2024-11-06 09:16:26.015994] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:23:27.224 [2024-11-06 09:16:26.016127] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83822 ] 00:23:27.224 [2024-11-06 09:16:26.192434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.483 [2024-11-06 09:16:26.320873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.742 [2024-11-06 09:16:26.533199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.742 [2024-11-06 09:16:26.533269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 malloc1 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 [2024-11-06 09:16:26.908453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:28.001 [2024-11-06 09:16:26.908520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.001 [2024-11-06 09:16:26.908547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:28.001 [2024-11-06 09:16:26.908559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.001 [2024-11-06 09:16:26.910953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.001 [2024-11-06 09:16:26.910990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:28.001 pt1 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 malloc2 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 [2024-11-06 09:16:26.964453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.001 [2024-11-06 09:16:26.964506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.001 [2024-11-06 09:16:26.964530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:28.001 [2024-11-06 09:16:26.964541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.001 [2024-11-06 09:16:26.966891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.001 [2024-11-06 09:16:26.966927] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.001 pt2 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 malloc3 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 [2024-11-06 09:16:27.033636] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.001 [2024-11-06 09:16:27.033688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.001 [2024-11-06 09:16:27.033712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:28.001 [2024-11-06 09:16:27.033723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.001 [2024-11-06 09:16:27.036186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.001 [2024-11-06 09:16:27.036221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.001 pt3 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:28.001 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:28.259 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.260 09:16:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.260 malloc4 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.260 [2024-11-06 09:16:27.091519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:28.260 [2024-11-06 09:16:27.091575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.260 [2024-11-06 09:16:27.091597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:28.260 [2024-11-06 09:16:27.091608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.260 [2024-11-06 09:16:27.093967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.260 [2024-11-06 09:16:27.094002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:28.260 pt4 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.260 [2024-11-06 09:16:27.103533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:28.260 [2024-11-06 09:16:27.105673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:28.260 [2024-11-06 09:16:27.105743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:28.260 [2024-11-06 09:16:27.105807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:28.260 [2024-11-06 09:16:27.105999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:28.260 [2024-11-06 09:16:27.106017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:28.260 [2024-11-06 09:16:27.106315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:28.260 [2024-11-06 09:16:27.113852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:28.260 [2024-11-06 09:16:27.113881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:28.260 [2024-11-06 09:16:27.114089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.260 
09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.260 "name": "raid_bdev1", 00:23:28.260 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:28.260 "strip_size_kb": 64, 00:23:28.260 "state": "online", 00:23:28.260 "raid_level": "raid5f", 00:23:28.260 "superblock": true, 00:23:28.260 "num_base_bdevs": 4, 00:23:28.260 "num_base_bdevs_discovered": 4, 00:23:28.260 "num_base_bdevs_operational": 4, 00:23:28.260 "base_bdevs_list": [ 00:23:28.260 { 00:23:28.260 "name": "pt1", 00:23:28.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.260 "is_configured": true, 00:23:28.260 "data_offset": 2048, 00:23:28.260 "data_size": 63488 00:23:28.260 }, 00:23:28.260 { 00:23:28.260 "name": "pt2", 00:23:28.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.260 "is_configured": true, 00:23:28.260 "data_offset": 2048, 00:23:28.260 
"data_size": 63488 00:23:28.260 }, 00:23:28.260 { 00:23:28.260 "name": "pt3", 00:23:28.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.260 "is_configured": true, 00:23:28.260 "data_offset": 2048, 00:23:28.260 "data_size": 63488 00:23:28.260 }, 00:23:28.260 { 00:23:28.260 "name": "pt4", 00:23:28.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.260 "is_configured": true, 00:23:28.260 "data_offset": 2048, 00:23:28.260 "data_size": 63488 00:23:28.260 } 00:23:28.260 ] 00:23:28.260 }' 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.260 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:28.519 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:28.778 [2024-11-06 09:16:27.566559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.778 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:28.779 "name": "raid_bdev1", 00:23:28.779 "aliases": [ 00:23:28.779 "eae2198d-c293-4078-8fb3-d47889acf84b" 00:23:28.779 ], 00:23:28.779 "product_name": "Raid Volume", 00:23:28.779 "block_size": 512, 00:23:28.779 "num_blocks": 190464, 00:23:28.779 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:28.779 "assigned_rate_limits": { 00:23:28.779 "rw_ios_per_sec": 0, 00:23:28.779 "rw_mbytes_per_sec": 0, 00:23:28.779 "r_mbytes_per_sec": 0, 00:23:28.779 "w_mbytes_per_sec": 0 00:23:28.779 }, 00:23:28.779 "claimed": false, 00:23:28.779 "zoned": false, 00:23:28.779 "supported_io_types": { 00:23:28.779 "read": true, 00:23:28.779 "write": true, 00:23:28.779 "unmap": false, 00:23:28.779 "flush": false, 00:23:28.779 "reset": true, 00:23:28.779 "nvme_admin": false, 00:23:28.779 "nvme_io": false, 00:23:28.779 "nvme_io_md": false, 00:23:28.779 "write_zeroes": true, 00:23:28.779 "zcopy": false, 00:23:28.779 "get_zone_info": false, 00:23:28.779 "zone_management": false, 00:23:28.779 "zone_append": false, 00:23:28.779 "compare": false, 00:23:28.779 "compare_and_write": false, 00:23:28.779 "abort": false, 00:23:28.779 "seek_hole": false, 00:23:28.779 "seek_data": false, 00:23:28.779 "copy": false, 00:23:28.779 "nvme_iov_md": false 00:23:28.779 }, 00:23:28.779 "driver_specific": { 00:23:28.779 "raid": { 00:23:28.779 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:28.779 "strip_size_kb": 64, 00:23:28.779 "state": "online", 00:23:28.779 "raid_level": "raid5f", 00:23:28.779 "superblock": true, 00:23:28.779 "num_base_bdevs": 4, 00:23:28.779 "num_base_bdevs_discovered": 4, 00:23:28.779 "num_base_bdevs_operational": 4, 00:23:28.779 "base_bdevs_list": [ 00:23:28.779 { 00:23:28.779 "name": "pt1", 00:23:28.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.779 "is_configured": true, 00:23:28.779 "data_offset": 2048, 
00:23:28.779 "data_size": 63488 00:23:28.779 }, 00:23:28.779 { 00:23:28.779 "name": "pt2", 00:23:28.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.779 "is_configured": true, 00:23:28.779 "data_offset": 2048, 00:23:28.779 "data_size": 63488 00:23:28.779 }, 00:23:28.779 { 00:23:28.779 "name": "pt3", 00:23:28.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.779 "is_configured": true, 00:23:28.779 "data_offset": 2048, 00:23:28.779 "data_size": 63488 00:23:28.779 }, 00:23:28.779 { 00:23:28.779 "name": "pt4", 00:23:28.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.779 "is_configured": true, 00:23:28.779 "data_offset": 2048, 00:23:28.779 "data_size": 63488 00:23:28.779 } 00:23:28.779 ] 00:23:28.779 } 00:23:28.779 } 00:23:28.779 }' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:28.779 pt2 00:23:28.779 pt3 00:23:28.779 pt4' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.779 09:16:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.779 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.047 09:16:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.047 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.047 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 [2024-11-06 09:16:27.882506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eae2198d-c293-4078-8fb3-d47889acf84b 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
eae2198d-c293-4078-8fb3-d47889acf84b ']' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 [2024-11-06 09:16:27.922334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:29.048 [2024-11-06 09:16:27.922367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.048 [2024-11-06 09:16:27.922452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.048 [2024-11-06 09:16:27.922535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.048 [2024-11-06 09:16:27.922552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:29.048 
09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:29.048 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.048 [2024-11-06 09:16:28.078369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:29.323 [2024-11-06 09:16:28.080569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:29.323 [2024-11-06 09:16:28.080624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:29.323 [2024-11-06 09:16:28.080660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:29.323 [2024-11-06 09:16:28.080712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:29.323 [2024-11-06 09:16:28.080764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:29.323 [2024-11-06 09:16:28.080785] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:29.323 [2024-11-06 09:16:28.080806] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:29.323 [2024-11-06 09:16:28.080822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:29.324 [2024-11-06 09:16:28.080834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:29.324 request: 00:23:29.324 { 00:23:29.324 "name": "raid_bdev1", 00:23:29.324 "raid_level": "raid5f", 00:23:29.324 "base_bdevs": [ 00:23:29.324 "malloc1", 00:23:29.324 "malloc2", 00:23:29.324 "malloc3", 00:23:29.324 "malloc4" 00:23:29.324 ], 00:23:29.324 "strip_size_kb": 64, 00:23:29.324 "superblock": false, 00:23:29.324 "method": "bdev_raid_create", 00:23:29.324 "req_id": 1 00:23:29.324 } 00:23:29.324 Got JSON-RPC error response 
00:23:29.324 response: 00:23:29.324 { 00:23:29.324 "code": -17, 00:23:29.324 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:29.324 } 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.324 [2024-11-06 09:16:28.134336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:29.324 [2024-11-06 09:16:28.134407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:29.324 [2024-11-06 09:16:28.134427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:29.324 [2024-11-06 09:16:28.134442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.324 [2024-11-06 09:16:28.136894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.324 [2024-11-06 09:16:28.136939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:29.324 [2024-11-06 09:16:28.137027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:29.324 [2024-11-06 09:16:28.137092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:29.324 pt1 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.324 "name": "raid_bdev1", 00:23:29.324 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:29.324 "strip_size_kb": 64, 00:23:29.324 "state": "configuring", 00:23:29.324 "raid_level": "raid5f", 00:23:29.324 "superblock": true, 00:23:29.324 "num_base_bdevs": 4, 00:23:29.324 "num_base_bdevs_discovered": 1, 00:23:29.324 "num_base_bdevs_operational": 4, 00:23:29.324 "base_bdevs_list": [ 00:23:29.324 { 00:23:29.324 "name": "pt1", 00:23:29.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:29.324 "is_configured": true, 00:23:29.324 "data_offset": 2048, 00:23:29.324 "data_size": 63488 00:23:29.324 }, 00:23:29.324 { 00:23:29.324 "name": null, 00:23:29.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.324 "is_configured": false, 00:23:29.324 "data_offset": 2048, 00:23:29.324 "data_size": 63488 00:23:29.324 }, 00:23:29.324 { 00:23:29.324 "name": null, 00:23:29.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:29.324 "is_configured": false, 00:23:29.324 "data_offset": 2048, 00:23:29.324 "data_size": 63488 00:23:29.324 }, 00:23:29.324 { 00:23:29.324 "name": null, 00:23:29.324 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:29.324 "is_configured": false, 00:23:29.324 "data_offset": 2048, 00:23:29.324 "data_size": 63488 00:23:29.324 } 00:23:29.324 ] 00:23:29.324 }' 
00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.324 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 [2024-11-06 09:16:28.542344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:29.583 [2024-11-06 09:16:28.542423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.583 [2024-11-06 09:16:28.542445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:29.583 [2024-11-06 09:16:28.542460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.583 [2024-11-06 09:16:28.542908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.583 [2024-11-06 09:16:28.542942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:29.583 [2024-11-06 09:16:28.543024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:29.583 [2024-11-06 09:16:28.543050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.583 pt2 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 [2024-11-06 09:16:28.554392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.583 "name": "raid_bdev1", 00:23:29.583 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:29.583 "strip_size_kb": 64, 00:23:29.583 "state": "configuring", 00:23:29.583 "raid_level": "raid5f", 00:23:29.583 "superblock": true, 00:23:29.583 "num_base_bdevs": 4, 00:23:29.583 "num_base_bdevs_discovered": 1, 00:23:29.583 "num_base_bdevs_operational": 4, 00:23:29.583 "base_bdevs_list": [ 00:23:29.583 { 00:23:29.583 "name": "pt1", 00:23:29.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:29.583 "is_configured": true, 00:23:29.583 "data_offset": 2048, 00:23:29.583 "data_size": 63488 00:23:29.583 }, 00:23:29.583 { 00:23:29.583 "name": null, 00:23:29.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.583 "is_configured": false, 00:23:29.583 "data_offset": 0, 00:23:29.583 "data_size": 63488 00:23:29.583 }, 00:23:29.583 { 00:23:29.583 "name": null, 00:23:29.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:29.583 "is_configured": false, 00:23:29.583 "data_offset": 2048, 00:23:29.583 "data_size": 63488 00:23:29.583 }, 00:23:29.583 { 00:23:29.583 "name": null, 00:23:29.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:29.583 "is_configured": false, 00:23:29.583 "data_offset": 2048, 00:23:29.583 "data_size": 63488 00:23:29.583 } 00:23:29.583 ] 00:23:29.583 }' 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.583 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:30.151 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:30.151 09:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:23:30.151 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.151 09:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 [2024-11-06 09:16:29.006348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:30.151 [2024-11-06 09:16:29.006416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.151 [2024-11-06 09:16:29.006441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:30.151 [2024-11-06 09:16:29.006453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.151 [2024-11-06 09:16:29.006927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.151 [2024-11-06 09:16:29.006945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:30.151 [2024-11-06 09:16:29.007037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:30.151 [2024-11-06 09:16:29.007059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:30.151 pt2 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 [2024-11-06 09:16:29.018345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:23:30.151 [2024-11-06 09:16:29.018399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.151 [2024-11-06 09:16:29.018424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:30.151 [2024-11-06 09:16:29.018436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.151 [2024-11-06 09:16:29.018873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.151 [2024-11-06 09:16:29.018891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:30.151 [2024-11-06 09:16:29.018970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:30.151 [2024-11-06 09:16:29.018991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:30.151 pt3 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 [2024-11-06 09:16:29.030298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:30.151 [2024-11-06 09:16:29.030351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.151 [2024-11-06 09:16:29.030375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:30.151 [2024-11-06 09:16:29.030385] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.151 [2024-11-06 09:16:29.030810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.151 [2024-11-06 09:16:29.030832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:30.151 [2024-11-06 09:16:29.030906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:30.151 [2024-11-06 09:16:29.030926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:30.151 [2024-11-06 09:16:29.031065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:30.151 [2024-11-06 09:16:29.031075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:30.151 [2024-11-06 09:16:29.031344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:30.151 [2024-11-06 09:16:29.038779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:30.151 [2024-11-06 09:16:29.038809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:30.151 [2024-11-06 09:16:29.039005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.151 pt4 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.151 "name": "raid_bdev1", 00:23:30.151 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:30.151 "strip_size_kb": 64, 00:23:30.151 "state": "online", 00:23:30.151 "raid_level": "raid5f", 00:23:30.151 "superblock": true, 00:23:30.151 "num_base_bdevs": 4, 00:23:30.151 "num_base_bdevs_discovered": 4, 00:23:30.151 "num_base_bdevs_operational": 4, 00:23:30.151 "base_bdevs_list": [ 00:23:30.151 { 00:23:30.151 "name": "pt1", 00:23:30.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:30.151 "is_configured": true, 00:23:30.151 
"data_offset": 2048, 00:23:30.151 "data_size": 63488 00:23:30.151 }, 00:23:30.151 { 00:23:30.151 "name": "pt2", 00:23:30.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.151 "is_configured": true, 00:23:30.151 "data_offset": 2048, 00:23:30.151 "data_size": 63488 00:23:30.151 }, 00:23:30.151 { 00:23:30.151 "name": "pt3", 00:23:30.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.151 "is_configured": true, 00:23:30.151 "data_offset": 2048, 00:23:30.151 "data_size": 63488 00:23:30.151 }, 00:23:30.151 { 00:23:30.151 "name": "pt4", 00:23:30.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:30.151 "is_configured": true, 00:23:30.151 "data_offset": 2048, 00:23:30.151 "data_size": 63488 00:23:30.151 } 00:23:30.151 ] 00:23:30.151 }' 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.151 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.719 09:16:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:30.719 [2024-11-06 09:16:29.463408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.719 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.719 "name": "raid_bdev1", 00:23:30.719 "aliases": [ 00:23:30.719 "eae2198d-c293-4078-8fb3-d47889acf84b" 00:23:30.719 ], 00:23:30.719 "product_name": "Raid Volume", 00:23:30.719 "block_size": 512, 00:23:30.719 "num_blocks": 190464, 00:23:30.719 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:30.719 "assigned_rate_limits": { 00:23:30.719 "rw_ios_per_sec": 0, 00:23:30.719 "rw_mbytes_per_sec": 0, 00:23:30.719 "r_mbytes_per_sec": 0, 00:23:30.719 "w_mbytes_per_sec": 0 00:23:30.719 }, 00:23:30.719 "claimed": false, 00:23:30.719 "zoned": false, 00:23:30.719 "supported_io_types": { 00:23:30.719 "read": true, 00:23:30.719 "write": true, 00:23:30.719 "unmap": false, 00:23:30.719 "flush": false, 00:23:30.719 "reset": true, 00:23:30.719 "nvme_admin": false, 00:23:30.719 "nvme_io": false, 00:23:30.719 "nvme_io_md": false, 00:23:30.720 "write_zeroes": true, 00:23:30.720 "zcopy": false, 00:23:30.720 "get_zone_info": false, 00:23:30.720 "zone_management": false, 00:23:30.720 "zone_append": false, 00:23:30.720 "compare": false, 00:23:30.720 "compare_and_write": false, 00:23:30.720 "abort": false, 00:23:30.720 "seek_hole": false, 00:23:30.720 "seek_data": false, 00:23:30.720 "copy": false, 00:23:30.720 "nvme_iov_md": false 00:23:30.720 }, 00:23:30.720 "driver_specific": { 00:23:30.720 "raid": { 00:23:30.720 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:30.720 "strip_size_kb": 64, 00:23:30.720 "state": "online", 00:23:30.720 "raid_level": "raid5f", 00:23:30.720 "superblock": true, 00:23:30.720 "num_base_bdevs": 4, 00:23:30.720 "num_base_bdevs_discovered": 4, 
00:23:30.720 "num_base_bdevs_operational": 4, 00:23:30.720 "base_bdevs_list": [ 00:23:30.720 { 00:23:30.720 "name": "pt1", 00:23:30.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:30.720 "is_configured": true, 00:23:30.720 "data_offset": 2048, 00:23:30.720 "data_size": 63488 00:23:30.720 }, 00:23:30.720 { 00:23:30.720 "name": "pt2", 00:23:30.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.720 "is_configured": true, 00:23:30.720 "data_offset": 2048, 00:23:30.720 "data_size": 63488 00:23:30.720 }, 00:23:30.720 { 00:23:30.720 "name": "pt3", 00:23:30.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.720 "is_configured": true, 00:23:30.720 "data_offset": 2048, 00:23:30.720 "data_size": 63488 00:23:30.720 }, 00:23:30.720 { 00:23:30.720 "name": "pt4", 00:23:30.720 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:30.720 "is_configured": true, 00:23:30.720 "data_offset": 2048, 00:23:30.720 "data_size": 63488 00:23:30.720 } 00:23:30.720 ] 00:23:30.720 } 00:23:30.720 } 00:23:30.720 }' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:30.720 pt2 00:23:30.720 pt3 00:23:30.720 pt4' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.720 09:16:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.720 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:30.980 [2024-11-06 09:16:29.778909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.980 
09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eae2198d-c293-4078-8fb3-d47889acf84b '!=' eae2198d-c293-4078-8fb3-d47889acf84b ']' 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.980 [2024-11-06 09:16:29.822701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.980 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.980 "name": "raid_bdev1", 00:23:30.980 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:30.980 "strip_size_kb": 64, 00:23:30.980 "state": "online", 00:23:30.980 "raid_level": "raid5f", 00:23:30.980 "superblock": true, 00:23:30.980 "num_base_bdevs": 4, 00:23:30.980 "num_base_bdevs_discovered": 3, 00:23:30.980 "num_base_bdevs_operational": 3, 00:23:30.980 "base_bdevs_list": [ 00:23:30.980 { 00:23:30.980 "name": null, 00:23:30.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.980 "is_configured": false, 00:23:30.980 "data_offset": 0, 00:23:30.980 "data_size": 63488 00:23:30.980 }, 00:23:30.980 { 00:23:30.980 "name": "pt2", 00:23:30.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.980 "is_configured": true, 00:23:30.980 "data_offset": 2048, 00:23:30.980 "data_size": 63488 00:23:30.980 }, 00:23:30.980 { 00:23:30.980 "name": "pt3", 00:23:30.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.980 "is_configured": true, 00:23:30.980 "data_offset": 2048, 00:23:30.980 "data_size": 63488 00:23:30.981 }, 00:23:30.981 { 00:23:30.981 "name": "pt4", 00:23:30.981 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:30.981 "is_configured": true, 00:23:30.981 
"data_offset": 2048, 00:23:30.981 "data_size": 63488 00:23:30.981 } 00:23:30.981 ] 00:23:30.981 }' 00:23:30.981 09:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.981 09:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.239 [2024-11-06 09:16:30.254331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.239 [2024-11-06 09:16:30.254367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.239 [2024-11-06 09:16:30.254452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.239 [2024-11-06 09:16:30.254535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.239 [2024-11-06 09:16:30.254548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.239 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.510 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 [2024-11-06 09:16:30.338311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:31.510 [2024-11-06 09:16:30.338367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.510 [2024-11-06 09:16:30.338389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:31.511 [2024-11-06 09:16:30.338400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.511 [2024-11-06 09:16:30.340854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.511 [2024-11-06 09:16:30.341000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:31.511 [2024-11-06 09:16:30.341104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:31.511 [2024-11-06 09:16:30.341152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.511 pt2 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.511 "name": "raid_bdev1", 00:23:31.511 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:31.511 "strip_size_kb": 64, 00:23:31.511 "state": "configuring", 00:23:31.511 "raid_level": "raid5f", 00:23:31.511 "superblock": true, 00:23:31.511 
"num_base_bdevs": 4, 00:23:31.511 "num_base_bdevs_discovered": 1, 00:23:31.511 "num_base_bdevs_operational": 3, 00:23:31.511 "base_bdevs_list": [ 00:23:31.511 { 00:23:31.511 "name": null, 00:23:31.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.511 "is_configured": false, 00:23:31.511 "data_offset": 2048, 00:23:31.511 "data_size": 63488 00:23:31.511 }, 00:23:31.511 { 00:23:31.511 "name": "pt2", 00:23:31.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.511 "is_configured": true, 00:23:31.511 "data_offset": 2048, 00:23:31.511 "data_size": 63488 00:23:31.511 }, 00:23:31.511 { 00:23:31.511 "name": null, 00:23:31.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.511 "is_configured": false, 00:23:31.511 "data_offset": 2048, 00:23:31.511 "data_size": 63488 00:23:31.511 }, 00:23:31.511 { 00:23:31.511 "name": null, 00:23:31.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:31.511 "is_configured": false, 00:23:31.511 "data_offset": 2048, 00:23:31.511 "data_size": 63488 00:23:31.511 } 00:23:31.511 ] 00:23:31.511 }' 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.511 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.770 [2024-11-06 09:16:30.718394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:31.770 [2024-11-06 
09:16:30.718477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.770 [2024-11-06 09:16:30.718504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:31.770 [2024-11-06 09:16:30.718517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.770 [2024-11-06 09:16:30.719000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.770 [2024-11-06 09:16:30.719021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:31.770 [2024-11-06 09:16:30.719114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:31.770 [2024-11-06 09:16:30.719147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:31.770 pt3 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.770 "name": "raid_bdev1", 00:23:31.770 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:31.770 "strip_size_kb": 64, 00:23:31.770 "state": "configuring", 00:23:31.770 "raid_level": "raid5f", 00:23:31.770 "superblock": true, 00:23:31.770 "num_base_bdevs": 4, 00:23:31.770 "num_base_bdevs_discovered": 2, 00:23:31.770 "num_base_bdevs_operational": 3, 00:23:31.770 "base_bdevs_list": [ 00:23:31.770 { 00:23:31.770 "name": null, 00:23:31.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.770 "is_configured": false, 00:23:31.770 "data_offset": 2048, 00:23:31.770 "data_size": 63488 00:23:31.770 }, 00:23:31.770 { 00:23:31.770 "name": "pt2", 00:23:31.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.770 "is_configured": true, 00:23:31.770 "data_offset": 2048, 00:23:31.770 "data_size": 63488 00:23:31.770 }, 00:23:31.770 { 00:23:31.770 "name": "pt3", 00:23:31.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.770 "is_configured": true, 00:23:31.770 "data_offset": 2048, 00:23:31.770 "data_size": 63488 00:23:31.770 }, 00:23:31.770 { 00:23:31.770 "name": null, 00:23:31.770 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:31.770 "is_configured": false, 00:23:31.770 "data_offset": 2048, 
00:23:31.770 "data_size": 63488 00:23:31.770 } 00:23:31.770 ] 00:23:31.770 }' 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.770 09:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.336 [2024-11-06 09:16:31.182405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:32.336 [2024-11-06 09:16:31.182642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.336 [2024-11-06 09:16:31.182679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:32.336 [2024-11-06 09:16:31.182693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.336 [2024-11-06 09:16:31.183187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.336 [2024-11-06 09:16:31.183208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:32.336 [2024-11-06 09:16:31.183328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:32.336 [2024-11-06 09:16:31.183367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:32.336 [2024-11-06 09:16:31.183530] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:32.336 [2024-11-06 09:16:31.183541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:32.336 [2024-11-06 09:16:31.183813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:32.336 [2024-11-06 09:16:31.191490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:32.336 [2024-11-06 09:16:31.191549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:32.336 [2024-11-06 09:16:31.191944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.336 pt4 00:23:32.336 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.337 
09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.337 "name": "raid_bdev1", 00:23:32.337 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:32.337 "strip_size_kb": 64, 00:23:32.337 "state": "online", 00:23:32.337 "raid_level": "raid5f", 00:23:32.337 "superblock": true, 00:23:32.337 "num_base_bdevs": 4, 00:23:32.337 "num_base_bdevs_discovered": 3, 00:23:32.337 "num_base_bdevs_operational": 3, 00:23:32.337 "base_bdevs_list": [ 00:23:32.337 { 00:23:32.337 "name": null, 00:23:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.337 "is_configured": false, 00:23:32.337 "data_offset": 2048, 00:23:32.337 "data_size": 63488 00:23:32.337 }, 00:23:32.337 { 00:23:32.337 "name": "pt2", 00:23:32.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.337 "is_configured": true, 00:23:32.337 "data_offset": 2048, 00:23:32.337 "data_size": 63488 00:23:32.337 }, 00:23:32.337 { 00:23:32.337 "name": "pt3", 00:23:32.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:32.337 "is_configured": true, 00:23:32.337 "data_offset": 2048, 00:23:32.337 "data_size": 63488 00:23:32.337 }, 00:23:32.337 { 00:23:32.337 "name": "pt4", 00:23:32.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:32.337 "is_configured": true, 00:23:32.337 "data_offset": 2048, 00:23:32.337 "data_size": 63488 00:23:32.337 } 00:23:32.337 ] 00:23:32.337 }' 00:23:32.337 09:16:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.337 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.904 [2024-11-06 09:16:31.661572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.904 [2024-11-06 09:16:31.661613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.904 [2024-11-06 09:16:31.661702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.904 [2024-11-06 09:16:31.661787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.904 [2024-11-06 09:16:31.661805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.904 [2024-11-06 09:16:31.733480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.904 [2024-11-06 09:16:31.733575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.904 [2024-11-06 09:16:31.733610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:32.904 [2024-11-06 09:16:31.733627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.904 [2024-11-06 09:16:31.736693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.904 [2024-11-06 09:16:31.736754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.904 [2024-11-06 09:16:31.736863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:32.904 [2024-11-06 09:16:31.736932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:32.904 
[2024-11-06 09:16:31.737097] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:32.904 [2024-11-06 09:16:31.737115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.904 [2024-11-06 09:16:31.737135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:32.904 [2024-11-06 09:16:31.737200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:32.904 [2024-11-06 09:16:31.737351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:32.904 pt1 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:32.904 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.905 "name": "raid_bdev1", 00:23:32.905 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:32.905 "strip_size_kb": 64, 00:23:32.905 "state": "configuring", 00:23:32.905 "raid_level": "raid5f", 00:23:32.905 "superblock": true, 00:23:32.905 "num_base_bdevs": 4, 00:23:32.905 "num_base_bdevs_discovered": 2, 00:23:32.905 "num_base_bdevs_operational": 3, 00:23:32.905 "base_bdevs_list": [ 00:23:32.905 { 00:23:32.905 "name": null, 00:23:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.905 "is_configured": false, 00:23:32.905 "data_offset": 2048, 00:23:32.905 "data_size": 63488 00:23:32.905 }, 00:23:32.905 { 00:23:32.905 "name": "pt2", 00:23:32.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.905 "is_configured": true, 00:23:32.905 "data_offset": 2048, 00:23:32.905 "data_size": 63488 00:23:32.905 }, 00:23:32.905 { 00:23:32.905 "name": "pt3", 00:23:32.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:32.905 "is_configured": true, 00:23:32.905 "data_offset": 2048, 00:23:32.905 "data_size": 63488 00:23:32.905 }, 00:23:32.905 { 00:23:32.905 "name": null, 00:23:32.905 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:32.905 "is_configured": false, 00:23:32.905 "data_offset": 2048, 00:23:32.905 "data_size": 63488 00:23:32.905 } 00:23:32.905 ] 
00:23:32.905 }' 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.905 09:16:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.163 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:33.163 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.163 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:33.163 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.163 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.420 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.421 [2024-11-06 09:16:32.224898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:33.421 [2024-11-06 09:16:32.224988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.421 [2024-11-06 09:16:32.225021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:33.421 [2024-11-06 09:16:32.225035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.421 [2024-11-06 09:16:32.225567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.421 [2024-11-06 09:16:32.225591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:23:33.421 [2024-11-06 09:16:32.225697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:33.421 [2024-11-06 09:16:32.225740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:33.421 [2024-11-06 09:16:32.225906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:33.421 [2024-11-06 09:16:32.225918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:33.421 [2024-11-06 09:16:32.226246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:33.421 [2024-11-06 09:16:32.235127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:33.421 [2024-11-06 09:16:32.235182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:33.421 [2024-11-06 09:16:32.235600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.421 pt4 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.421 09:16:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.421 "name": "raid_bdev1", 00:23:33.421 "uuid": "eae2198d-c293-4078-8fb3-d47889acf84b", 00:23:33.421 "strip_size_kb": 64, 00:23:33.421 "state": "online", 00:23:33.421 "raid_level": "raid5f", 00:23:33.421 "superblock": true, 00:23:33.421 "num_base_bdevs": 4, 00:23:33.421 "num_base_bdevs_discovered": 3, 00:23:33.421 "num_base_bdevs_operational": 3, 00:23:33.421 "base_bdevs_list": [ 00:23:33.421 { 00:23:33.421 "name": null, 00:23:33.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.421 "is_configured": false, 00:23:33.421 "data_offset": 2048, 00:23:33.421 "data_size": 63488 00:23:33.421 }, 00:23:33.421 { 00:23:33.421 "name": "pt2", 00:23:33.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.421 "is_configured": true, 00:23:33.421 "data_offset": 2048, 00:23:33.421 "data_size": 63488 00:23:33.421 }, 00:23:33.421 { 00:23:33.421 "name": "pt3", 00:23:33.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:33.421 "is_configured": true, 00:23:33.421 "data_offset": 2048, 00:23:33.421 "data_size": 63488 
00:23:33.421 }, 00:23:33.421 { 00:23:33.421 "name": "pt4", 00:23:33.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:33.421 "is_configured": true, 00:23:33.421 "data_offset": 2048, 00:23:33.421 "data_size": 63488 00:23:33.421 } 00:23:33.421 ] 00:23:33.421 }' 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.421 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.680 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:33.680 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:33.680 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.681 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.681 [2024-11-06 09:16:32.697156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eae2198d-c293-4078-8fb3-d47889acf84b '!=' eae2198d-c293-4078-8fb3-d47889acf84b ']' 00:23:33.939 09:16:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83822 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 83822 ']' 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 83822 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83822 00:23:33.939 killing process with pid 83822 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83822' 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 83822 00:23:33.939 [2024-11-06 09:16:32.779475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:33.939 [2024-11-06 09:16:32.779593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.939 [2024-11-06 09:16:32.779684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.939 [2024-11-06 09:16:32.779702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:33.939 09:16:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 83822 00:23:34.200 [2024-11-06 09:16:33.212814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:35.581 ************************************ 00:23:35.581 END TEST raid5f_superblock_test 00:23:35.581 
************************************ 00:23:35.581 09:16:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:35.581 00:23:35.581 real 0m8.500s 00:23:35.581 user 0m13.179s 00:23:35.581 sys 0m1.813s 00:23:35.581 09:16:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:35.581 09:16:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.581 09:16:34 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:35.581 09:16:34 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:35.581 09:16:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:35.581 09:16:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:35.581 09:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:35.581 ************************************ 00:23:35.581 START TEST raid5f_rebuild_test 00:23:35.581 ************************************ 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:35.581 09:16:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84308 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84308 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84308 ']' 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.581 09:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.840 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:35.840 Zero copy mechanism will not be used. 00:23:35.840 [2024-11-06 09:16:34.620342] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:23:35.840 [2024-11-06 09:16:34.620500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84308 ] 00:23:35.840 [2024-11-06 09:16:34.805978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.097 [2024-11-06 09:16:34.931247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.355 [2024-11-06 09:16:35.151740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:36.355 [2024-11-06 09:16:35.152040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.613 BaseBdev1_malloc 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.613 [2024-11-06 09:16:35.549721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:36.613 [2024-11-06 09:16:35.549811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.613 [2024-11-06 09:16:35.549847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:36.613 [2024-11-06 09:16:35.549863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.613 [2024-11-06 09:16:35.552538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.613 [2024-11-06 09:16:35.552589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:36.613 BaseBdev1 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.613 BaseBdev2_malloc 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.613 [2024-11-06 09:16:35.608578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:36.613 [2024-11-06 09:16:35.608692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.613 [2024-11-06 09:16:35.608722] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:36.613 [2024-11-06 09:16:35.608739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.613 [2024-11-06 09:16:35.611406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.613 [2024-11-06 09:16:35.611611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:36.613 BaseBdev2 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.613 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 BaseBdev3_malloc 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 [2024-11-06 09:16:35.677641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:36.872 [2024-11-06 09:16:35.677721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.872 [2024-11-06 09:16:35.677754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:36.872 [2024-11-06 09:16:35.677770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.872 
[2024-11-06 09:16:35.680411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.872 [2024-11-06 09:16:35.680623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:36.872 BaseBdev3 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 BaseBdev4_malloc 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 [2024-11-06 09:16:35.736971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:36.872 [2024-11-06 09:16:35.737067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.872 [2024-11-06 09:16:35.737096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:36.872 [2024-11-06 09:16:35.737112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.872 [2024-11-06 09:16:35.739763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.872 [2024-11-06 09:16:35.739819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:23:36.872 BaseBdev4 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 spare_malloc 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 spare_delay 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 [2024-11-06 09:16:35.807989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.872 [2024-11-06 09:16:35.808079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.872 [2024-11-06 09:16:35.808111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:36.872 [2024-11-06 09:16:35.808126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.872 [2024-11-06 09:16:35.810832] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.872 [2024-11-06 09:16:35.811037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.872 spare 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 [2024-11-06 09:16:35.820086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.872 [2024-11-06 09:16:35.822445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.872 [2024-11-06 09:16:35.822520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.872 [2024-11-06 09:16:35.822577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:36.872 [2024-11-06 09:16:35.822689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:36.872 [2024-11-06 09:16:35.822706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:36.872 [2024-11-06 09:16:35.823032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:36.872 [2024-11-06 09:16:35.831515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:36.872 [2024-11-06 09:16:35.831555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:36.872 [2024-11-06 09:16:35.831880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.872 09:16:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.872 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.872 "name": "raid_bdev1", 00:23:36.872 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:36.872 "strip_size_kb": 64, 00:23:36.872 "state": "online", 00:23:36.872 
"raid_level": "raid5f", 00:23:36.872 "superblock": false, 00:23:36.872 "num_base_bdevs": 4, 00:23:36.872 "num_base_bdevs_discovered": 4, 00:23:36.872 "num_base_bdevs_operational": 4, 00:23:36.872 "base_bdevs_list": [ 00:23:36.872 { 00:23:36.872 "name": "BaseBdev1", 00:23:36.872 "uuid": "f0b78eaa-97f1-59bc-a69a-299641dfeeb9", 00:23:36.872 "is_configured": true, 00:23:36.872 "data_offset": 0, 00:23:36.872 "data_size": 65536 00:23:36.872 }, 00:23:36.873 { 00:23:36.873 "name": "BaseBdev2", 00:23:36.873 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:36.873 "is_configured": true, 00:23:36.873 "data_offset": 0, 00:23:36.873 "data_size": 65536 00:23:36.873 }, 00:23:36.873 { 00:23:36.873 "name": "BaseBdev3", 00:23:36.873 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:36.873 "is_configured": true, 00:23:36.873 "data_offset": 0, 00:23:36.873 "data_size": 65536 00:23:36.873 }, 00:23:36.873 { 00:23:36.873 "name": "BaseBdev4", 00:23:36.873 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:36.873 "is_configured": true, 00:23:36.873 "data_offset": 0, 00:23:36.873 "data_size": 65536 00:23:36.873 } 00:23:36.873 ] 00:23:36.873 }' 00:23:36.873 09:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.873 09:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.438 [2024-11-06 09:16:36.284568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:23:37.438 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:37.697 [2024-11-06 09:16:36.579961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:37.697 /dev/nbd0 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:37.697 1+0 records in 00:23:37.697 1+0 records out 00:23:37.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409255 s, 10.0 MB/s 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:37.697 09:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:38.263 512+0 records in 00:23:38.263 512+0 records out 00:23:38.263 100663296 bytes (101 MB, 96 MiB) copied, 0.543655 s, 185 MB/s 00:23:38.263 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:38.263 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:38.264 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:38.264 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:38.264 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:38.264 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:38.264 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:38.521 [2024-11-06 09:16:37.442842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.521 [2024-11-06 09:16:37.497350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.521 "name": "raid_bdev1", 00:23:38.521 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:38.521 "strip_size_kb": 64, 00:23:38.521 "state": "online", 00:23:38.521 "raid_level": "raid5f", 00:23:38.521 "superblock": false, 00:23:38.521 "num_base_bdevs": 4, 00:23:38.521 "num_base_bdevs_discovered": 3, 00:23:38.521 "num_base_bdevs_operational": 3, 00:23:38.521 "base_bdevs_list": [ 00:23:38.521 { 00:23:38.521 "name": null, 00:23:38.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.521 "is_configured": false, 00:23:38.521 "data_offset": 0, 00:23:38.521 "data_size": 65536 00:23:38.521 }, 00:23:38.521 { 00:23:38.521 "name": "BaseBdev2", 00:23:38.521 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:38.521 "is_configured": true, 00:23:38.521 "data_offset": 0, 00:23:38.521 "data_size": 65536 00:23:38.521 }, 00:23:38.521 { 00:23:38.521 "name": "BaseBdev3", 00:23:38.521 "uuid": 
"0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:38.521 "is_configured": true, 00:23:38.521 "data_offset": 0, 00:23:38.521 "data_size": 65536 00:23:38.521 }, 00:23:38.521 { 00:23:38.521 "name": "BaseBdev4", 00:23:38.521 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:38.521 "is_configured": true, 00:23:38.521 "data_offset": 0, 00:23:38.521 "data_size": 65536 00:23:38.521 } 00:23:38.521 ] 00:23:38.521 }' 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.521 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.085 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.085 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.085 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.085 [2024-11-06 09:16:37.940694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.085 [2024-11-06 09:16:37.960207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:39.085 09:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.085 09:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:39.085 [2024-11-06 09:16:37.971848] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:40.016 09:16:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.016 09:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.016 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.016 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.016 "name": "raid_bdev1", 00:23:40.016 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:40.016 "strip_size_kb": 64, 00:23:40.016 "state": "online", 00:23:40.016 "raid_level": "raid5f", 00:23:40.016 "superblock": false, 00:23:40.016 "num_base_bdevs": 4, 00:23:40.016 "num_base_bdevs_discovered": 4, 00:23:40.016 "num_base_bdevs_operational": 4, 00:23:40.016 "process": { 00:23:40.016 "type": "rebuild", 00:23:40.016 "target": "spare", 00:23:40.016 "progress": { 00:23:40.016 "blocks": 19200, 00:23:40.016 "percent": 9 00:23:40.016 } 00:23:40.016 }, 00:23:40.016 "base_bdevs_list": [ 00:23:40.016 { 00:23:40.016 "name": "spare", 00:23:40.016 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:40.016 "is_configured": true, 00:23:40.016 "data_offset": 0, 00:23:40.016 "data_size": 65536 00:23:40.016 }, 00:23:40.016 { 00:23:40.016 "name": "BaseBdev2", 00:23:40.016 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:40.016 "is_configured": true, 00:23:40.016 "data_offset": 0, 00:23:40.016 "data_size": 65536 00:23:40.016 }, 00:23:40.016 { 00:23:40.016 "name": "BaseBdev3", 00:23:40.016 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:40.016 "is_configured": true, 00:23:40.016 "data_offset": 0, 00:23:40.016 "data_size": 65536 00:23:40.016 }, 
00:23:40.016 { 00:23:40.016 "name": "BaseBdev4", 00:23:40.016 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:40.016 "is_configured": true, 00:23:40.016 "data_offset": 0, 00:23:40.016 "data_size": 65536 00:23:40.016 } 00:23:40.016 ] 00:23:40.016 }' 00:23:40.016 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.273 [2024-11-06 09:16:39.124067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.273 [2024-11-06 09:16:39.181929] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:40.273 [2024-11-06 09:16:39.182043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.273 [2024-11-06 09:16:39.182065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.273 [2024-11-06 09:16:39.182078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.273 "name": "raid_bdev1", 00:23:40.273 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:40.273 "strip_size_kb": 64, 00:23:40.273 "state": "online", 00:23:40.273 "raid_level": "raid5f", 00:23:40.273 "superblock": false, 00:23:40.273 "num_base_bdevs": 4, 00:23:40.273 "num_base_bdevs_discovered": 3, 00:23:40.273 "num_base_bdevs_operational": 3, 00:23:40.273 "base_bdevs_list": [ 00:23:40.273 { 00:23:40.273 "name": null, 00:23:40.273 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:40.273 "is_configured": false, 00:23:40.273 "data_offset": 0, 00:23:40.273 "data_size": 65536 00:23:40.273 }, 00:23:40.273 { 00:23:40.273 "name": "BaseBdev2", 00:23:40.273 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:40.273 "is_configured": true, 00:23:40.273 "data_offset": 0, 00:23:40.273 "data_size": 65536 00:23:40.273 }, 00:23:40.273 { 00:23:40.273 "name": "BaseBdev3", 00:23:40.273 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:40.273 "is_configured": true, 00:23:40.273 "data_offset": 0, 00:23:40.273 "data_size": 65536 00:23:40.273 }, 00:23:40.273 { 00:23:40.273 "name": "BaseBdev4", 00:23:40.273 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:40.273 "is_configured": true, 00:23:40.273 "data_offset": 0, 00:23:40.273 "data_size": 65536 00:23:40.273 } 00:23:40.273 ] 00:23:40.273 }' 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.273 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.836 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.836 "name": "raid_bdev1", 00:23:40.836 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:40.836 "strip_size_kb": 64, 00:23:40.836 "state": "online", 00:23:40.836 "raid_level": "raid5f", 00:23:40.836 "superblock": false, 00:23:40.836 "num_base_bdevs": 4, 00:23:40.836 "num_base_bdevs_discovered": 3, 00:23:40.836 "num_base_bdevs_operational": 3, 00:23:40.836 "base_bdevs_list": [ 00:23:40.836 { 00:23:40.836 "name": null, 00:23:40.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.836 "is_configured": false, 00:23:40.836 "data_offset": 0, 00:23:40.836 "data_size": 65536 00:23:40.836 }, 00:23:40.836 { 00:23:40.836 "name": "BaseBdev2", 00:23:40.836 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:40.836 "is_configured": true, 00:23:40.836 "data_offset": 0, 00:23:40.836 "data_size": 65536 00:23:40.836 }, 00:23:40.836 { 00:23:40.836 "name": "BaseBdev3", 00:23:40.836 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:40.836 "is_configured": true, 00:23:40.836 "data_offset": 0, 00:23:40.837 "data_size": 65536 00:23:40.837 }, 00:23:40.837 { 00:23:40.837 "name": "BaseBdev4", 00:23:40.837 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:40.837 "is_configured": true, 00:23:40.837 "data_offset": 0, 00:23:40.837 "data_size": 65536 00:23:40.837 } 00:23:40.837 ] 00:23:40.837 }' 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.837 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.837 [2024-11-06 09:16:39.862366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.093 [2024-11-06 09:16:39.880386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:41.094 09:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.094 09:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:41.094 [2024-11-06 09:16:39.891673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.027 "name": "raid_bdev1", 00:23:42.027 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:42.027 "strip_size_kb": 64, 00:23:42.027 "state": "online", 00:23:42.027 "raid_level": "raid5f", 00:23:42.027 "superblock": false, 00:23:42.027 "num_base_bdevs": 4, 00:23:42.027 "num_base_bdevs_discovered": 4, 00:23:42.027 "num_base_bdevs_operational": 4, 00:23:42.027 "process": { 00:23:42.027 "type": "rebuild", 00:23:42.027 "target": "spare", 00:23:42.027 "progress": { 00:23:42.027 "blocks": 17280, 00:23:42.027 "percent": 8 00:23:42.027 } 00:23:42.027 }, 00:23:42.027 "base_bdevs_list": [ 00:23:42.027 { 00:23:42.027 "name": "spare", 00:23:42.027 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:42.027 "is_configured": true, 00:23:42.027 "data_offset": 0, 00:23:42.027 "data_size": 65536 00:23:42.027 }, 00:23:42.027 { 00:23:42.027 "name": "BaseBdev2", 00:23:42.027 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:42.027 "is_configured": true, 00:23:42.027 "data_offset": 0, 00:23:42.027 "data_size": 65536 00:23:42.027 }, 00:23:42.027 { 00:23:42.027 "name": "BaseBdev3", 00:23:42.027 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:42.027 "is_configured": true, 00:23:42.027 "data_offset": 0, 00:23:42.027 "data_size": 65536 00:23:42.027 }, 00:23:42.027 { 00:23:42.027 "name": "BaseBdev4", 00:23:42.027 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:42.027 "is_configured": true, 00:23:42.027 "data_offset": 0, 00:23:42.027 "data_size": 65536 00:23:42.027 } 00:23:42.027 ] 00:23:42.027 }' 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.027 09:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=616 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.027 09:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.286 "name": "raid_bdev1", 00:23:42.286 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:42.286 "strip_size_kb": 64, 
00:23:42.286 "state": "online", 00:23:42.286 "raid_level": "raid5f", 00:23:42.286 "superblock": false, 00:23:42.286 "num_base_bdevs": 4, 00:23:42.286 "num_base_bdevs_discovered": 4, 00:23:42.286 "num_base_bdevs_operational": 4, 00:23:42.286 "process": { 00:23:42.286 "type": "rebuild", 00:23:42.286 "target": "spare", 00:23:42.286 "progress": { 00:23:42.286 "blocks": 21120, 00:23:42.286 "percent": 10 00:23:42.286 } 00:23:42.286 }, 00:23:42.286 "base_bdevs_list": [ 00:23:42.286 { 00:23:42.286 "name": "spare", 00:23:42.286 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:42.286 "is_configured": true, 00:23:42.286 "data_offset": 0, 00:23:42.286 "data_size": 65536 00:23:42.286 }, 00:23:42.286 { 00:23:42.286 "name": "BaseBdev2", 00:23:42.286 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:42.286 "is_configured": true, 00:23:42.286 "data_offset": 0, 00:23:42.286 "data_size": 65536 00:23:42.286 }, 00:23:42.286 { 00:23:42.286 "name": "BaseBdev3", 00:23:42.286 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:42.286 "is_configured": true, 00:23:42.286 "data_offset": 0, 00:23:42.286 "data_size": 65536 00:23:42.286 }, 00:23:42.286 { 00:23:42.286 "name": "BaseBdev4", 00:23:42.286 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:42.286 "is_configured": true, 00:23:42.286 "data_offset": 0, 00:23:42.286 "data_size": 65536 00:23:42.286 } 00:23:42.286 ] 00:23:42.286 }' 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.286 09:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.222 "name": "raid_bdev1", 00:23:43.222 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:43.222 "strip_size_kb": 64, 00:23:43.222 "state": "online", 00:23:43.222 "raid_level": "raid5f", 00:23:43.222 "superblock": false, 00:23:43.222 "num_base_bdevs": 4, 00:23:43.222 "num_base_bdevs_discovered": 4, 00:23:43.222 "num_base_bdevs_operational": 4, 00:23:43.222 "process": { 00:23:43.222 "type": "rebuild", 00:23:43.222 "target": "spare", 00:23:43.222 "progress": { 00:23:43.222 "blocks": 42240, 00:23:43.222 "percent": 21 00:23:43.222 } 00:23:43.222 }, 00:23:43.222 "base_bdevs_list": [ 00:23:43.222 { 00:23:43.222 "name": "spare", 00:23:43.222 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:43.222 "is_configured": true, 
00:23:43.222 "data_offset": 0, 00:23:43.222 "data_size": 65536 00:23:43.222 }, 00:23:43.222 { 00:23:43.222 "name": "BaseBdev2", 00:23:43.222 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:43.222 "is_configured": true, 00:23:43.222 "data_offset": 0, 00:23:43.222 "data_size": 65536 00:23:43.222 }, 00:23:43.222 { 00:23:43.222 "name": "BaseBdev3", 00:23:43.222 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:43.222 "is_configured": true, 00:23:43.222 "data_offset": 0, 00:23:43.222 "data_size": 65536 00:23:43.222 }, 00:23:43.222 { 00:23:43.222 "name": "BaseBdev4", 00:23:43.222 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:43.222 "is_configured": true, 00:23:43.222 "data_offset": 0, 00:23:43.222 "data_size": 65536 00:23:43.222 } 00:23:43.222 ] 00:23:43.222 }' 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.222 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.481 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.481 09:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.453 "name": "raid_bdev1", 00:23:44.453 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:44.453 "strip_size_kb": 64, 00:23:44.453 "state": "online", 00:23:44.453 "raid_level": "raid5f", 00:23:44.453 "superblock": false, 00:23:44.453 "num_base_bdevs": 4, 00:23:44.453 "num_base_bdevs_discovered": 4, 00:23:44.453 "num_base_bdevs_operational": 4, 00:23:44.453 "process": { 00:23:44.453 "type": "rebuild", 00:23:44.453 "target": "spare", 00:23:44.453 "progress": { 00:23:44.453 "blocks": 65280, 00:23:44.453 "percent": 33 00:23:44.453 } 00:23:44.453 }, 00:23:44.453 "base_bdevs_list": [ 00:23:44.453 { 00:23:44.453 "name": "spare", 00:23:44.453 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:44.453 "is_configured": true, 00:23:44.453 "data_offset": 0, 00:23:44.453 "data_size": 65536 00:23:44.453 }, 00:23:44.453 { 00:23:44.453 "name": "BaseBdev2", 00:23:44.453 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:44.453 "is_configured": true, 00:23:44.453 "data_offset": 0, 00:23:44.453 "data_size": 65536 00:23:44.453 }, 00:23:44.453 { 00:23:44.453 "name": "BaseBdev3", 00:23:44.453 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:44.453 "is_configured": true, 00:23:44.453 "data_offset": 0, 00:23:44.453 "data_size": 65536 00:23:44.453 }, 00:23:44.453 { 00:23:44.453 "name": "BaseBdev4", 00:23:44.453 "uuid": 
"5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:44.453 "is_configured": true, 00:23:44.453 "data_offset": 0, 00:23:44.453 "data_size": 65536 00:23:44.453 } 00:23:44.453 ] 00:23:44.453 }' 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.453 09:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.827 09:16:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.827 "name": "raid_bdev1", 00:23:45.827 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:45.827 "strip_size_kb": 64, 00:23:45.827 "state": "online", 00:23:45.827 "raid_level": "raid5f", 00:23:45.827 "superblock": false, 00:23:45.827 "num_base_bdevs": 4, 00:23:45.827 "num_base_bdevs_discovered": 4, 00:23:45.827 "num_base_bdevs_operational": 4, 00:23:45.827 "process": { 00:23:45.827 "type": "rebuild", 00:23:45.827 "target": "spare", 00:23:45.827 "progress": { 00:23:45.827 "blocks": 86400, 00:23:45.827 "percent": 43 00:23:45.827 } 00:23:45.827 }, 00:23:45.827 "base_bdevs_list": [ 00:23:45.827 { 00:23:45.827 "name": "spare", 00:23:45.827 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:45.827 "is_configured": true, 00:23:45.827 "data_offset": 0, 00:23:45.827 "data_size": 65536 00:23:45.827 }, 00:23:45.827 { 00:23:45.827 "name": "BaseBdev2", 00:23:45.827 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:45.827 "is_configured": true, 00:23:45.827 "data_offset": 0, 00:23:45.827 "data_size": 65536 00:23:45.827 }, 00:23:45.827 { 00:23:45.827 "name": "BaseBdev3", 00:23:45.827 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:45.827 "is_configured": true, 00:23:45.827 "data_offset": 0, 00:23:45.827 "data_size": 65536 00:23:45.827 }, 00:23:45.827 { 00:23:45.827 "name": "BaseBdev4", 00:23:45.827 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:45.828 "is_configured": true, 00:23:45.828 "data_offset": 0, 00:23:45.828 "data_size": 65536 00:23:45.828 } 00:23:45.828 ] 00:23:45.828 }' 00:23:45.828 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.828 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.828 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.828 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:23:45.828 09:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.762 "name": "raid_bdev1", 00:23:46.762 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:46.762 "strip_size_kb": 64, 00:23:46.762 "state": "online", 00:23:46.762 "raid_level": "raid5f", 00:23:46.762 "superblock": false, 00:23:46.762 "num_base_bdevs": 4, 00:23:46.762 "num_base_bdevs_discovered": 4, 00:23:46.762 "num_base_bdevs_operational": 4, 00:23:46.762 "process": { 00:23:46.762 "type": "rebuild", 00:23:46.762 "target": "spare", 00:23:46.762 "progress": { 00:23:46.762 "blocks": 107520, 00:23:46.762 "percent": 54 00:23:46.762 } 00:23:46.762 }, 00:23:46.762 
"base_bdevs_list": [ 00:23:46.762 { 00:23:46.762 "name": "spare", 00:23:46.762 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:46.762 "is_configured": true, 00:23:46.762 "data_offset": 0, 00:23:46.762 "data_size": 65536 00:23:46.762 }, 00:23:46.762 { 00:23:46.762 "name": "BaseBdev2", 00:23:46.762 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:46.762 "is_configured": true, 00:23:46.762 "data_offset": 0, 00:23:46.762 "data_size": 65536 00:23:46.762 }, 00:23:46.762 { 00:23:46.762 "name": "BaseBdev3", 00:23:46.762 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:46.762 "is_configured": true, 00:23:46.762 "data_offset": 0, 00:23:46.762 "data_size": 65536 00:23:46.762 }, 00:23:46.762 { 00:23:46.762 "name": "BaseBdev4", 00:23:46.762 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:46.762 "is_configured": true, 00:23:46.762 "data_offset": 0, 00:23:46.762 "data_size": 65536 00:23:46.762 } 00:23:46.762 ] 00:23:46.762 }' 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.762 09:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:48.136 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:48.136 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.136 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.136 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.137 09:16:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.137 "name": "raid_bdev1", 00:23:48.137 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:48.137 "strip_size_kb": 64, 00:23:48.137 "state": "online", 00:23:48.137 "raid_level": "raid5f", 00:23:48.137 "superblock": false, 00:23:48.137 "num_base_bdevs": 4, 00:23:48.137 "num_base_bdevs_discovered": 4, 00:23:48.137 "num_base_bdevs_operational": 4, 00:23:48.137 "process": { 00:23:48.137 "type": "rebuild", 00:23:48.137 "target": "spare", 00:23:48.137 "progress": { 00:23:48.137 "blocks": 130560, 00:23:48.137 "percent": 66 00:23:48.137 } 00:23:48.137 }, 00:23:48.137 "base_bdevs_list": [ 00:23:48.137 { 00:23:48.137 "name": "spare", 00:23:48.137 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:48.137 "is_configured": true, 00:23:48.137 "data_offset": 0, 00:23:48.137 "data_size": 65536 00:23:48.137 }, 00:23:48.137 { 00:23:48.137 "name": "BaseBdev2", 00:23:48.137 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:48.137 "is_configured": true, 00:23:48.137 "data_offset": 0, 00:23:48.137 "data_size": 65536 00:23:48.137 }, 00:23:48.137 { 00:23:48.137 "name": "BaseBdev3", 00:23:48.137 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:48.137 
"is_configured": true, 00:23:48.137 "data_offset": 0, 00:23:48.137 "data_size": 65536 00:23:48.137 }, 00:23:48.137 { 00:23:48.137 "name": "BaseBdev4", 00:23:48.137 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:48.137 "is_configured": true, 00:23:48.137 "data_offset": 0, 00:23:48.137 "data_size": 65536 00:23:48.137 } 00:23:48.137 ] 00:23:48.137 }' 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.137 09:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.072 "name": "raid_bdev1", 00:23:49.072 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:49.072 "strip_size_kb": 64, 00:23:49.072 "state": "online", 00:23:49.072 "raid_level": "raid5f", 00:23:49.072 "superblock": false, 00:23:49.072 "num_base_bdevs": 4, 00:23:49.072 "num_base_bdevs_discovered": 4, 00:23:49.072 "num_base_bdevs_operational": 4, 00:23:49.072 "process": { 00:23:49.072 "type": "rebuild", 00:23:49.072 "target": "spare", 00:23:49.072 "progress": { 00:23:49.072 "blocks": 151680, 00:23:49.072 "percent": 77 00:23:49.072 } 00:23:49.072 }, 00:23:49.072 "base_bdevs_list": [ 00:23:49.072 { 00:23:49.072 "name": "spare", 00:23:49.072 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:49.072 "is_configured": true, 00:23:49.072 "data_offset": 0, 00:23:49.072 "data_size": 65536 00:23:49.072 }, 00:23:49.072 { 00:23:49.072 "name": "BaseBdev2", 00:23:49.072 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:49.072 "is_configured": true, 00:23:49.072 "data_offset": 0, 00:23:49.072 "data_size": 65536 00:23:49.072 }, 00:23:49.072 { 00:23:49.072 "name": "BaseBdev3", 00:23:49.072 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:49.072 "is_configured": true, 00:23:49.072 "data_offset": 0, 00:23:49.072 "data_size": 65536 00:23:49.072 }, 00:23:49.072 { 00:23:49.072 "name": "BaseBdev4", 00:23:49.072 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:49.072 "is_configured": true, 00:23:49.072 "data_offset": 0, 00:23:49.072 "data_size": 65536 00:23:49.072 } 00:23:49.072 ] 00:23:49.072 }' 00:23:49.072 09:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.072 09:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.072 09:16:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.072 09:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.072 09:16:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.007 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.266 "name": "raid_bdev1", 00:23:50.266 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:50.266 "strip_size_kb": 64, 00:23:50.266 "state": "online", 00:23:50.266 "raid_level": "raid5f", 00:23:50.266 "superblock": false, 00:23:50.266 "num_base_bdevs": 4, 00:23:50.266 "num_base_bdevs_discovered": 4, 00:23:50.266 "num_base_bdevs_operational": 4, 00:23:50.266 "process": { 00:23:50.266 
"type": "rebuild", 00:23:50.266 "target": "spare", 00:23:50.266 "progress": { 00:23:50.266 "blocks": 172800, 00:23:50.266 "percent": 87 00:23:50.266 } 00:23:50.266 }, 00:23:50.266 "base_bdevs_list": [ 00:23:50.266 { 00:23:50.266 "name": "spare", 00:23:50.266 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:50.266 "is_configured": true, 00:23:50.266 "data_offset": 0, 00:23:50.266 "data_size": 65536 00:23:50.266 }, 00:23:50.266 { 00:23:50.266 "name": "BaseBdev2", 00:23:50.266 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:50.266 "is_configured": true, 00:23:50.266 "data_offset": 0, 00:23:50.266 "data_size": 65536 00:23:50.266 }, 00:23:50.266 { 00:23:50.266 "name": "BaseBdev3", 00:23:50.266 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:50.266 "is_configured": true, 00:23:50.266 "data_offset": 0, 00:23:50.266 "data_size": 65536 00:23:50.266 }, 00:23:50.266 { 00:23:50.266 "name": "BaseBdev4", 00:23:50.266 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:50.266 "is_configured": true, 00:23:50.266 "data_offset": 0, 00:23:50.266 "data_size": 65536 00:23:50.266 } 00:23:50.266 ] 00:23:50.266 }' 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.266 09:16:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.202 "name": "raid_bdev1", 00:23:51.202 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:51.202 "strip_size_kb": 64, 00:23:51.202 "state": "online", 00:23:51.202 "raid_level": "raid5f", 00:23:51.202 "superblock": false, 00:23:51.202 "num_base_bdevs": 4, 00:23:51.202 "num_base_bdevs_discovered": 4, 00:23:51.202 "num_base_bdevs_operational": 4, 00:23:51.202 "process": { 00:23:51.202 "type": "rebuild", 00:23:51.202 "target": "spare", 00:23:51.202 "progress": { 00:23:51.202 "blocks": 195840, 00:23:51.202 "percent": 99 00:23:51.202 } 00:23:51.202 }, 00:23:51.202 "base_bdevs_list": [ 00:23:51.202 { 00:23:51.202 "name": "spare", 00:23:51.202 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:51.202 "is_configured": true, 00:23:51.202 "data_offset": 0, 00:23:51.202 "data_size": 65536 00:23:51.202 }, 00:23:51.202 { 00:23:51.202 "name": "BaseBdev2", 00:23:51.202 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:51.202 "is_configured": true, 00:23:51.202 "data_offset": 0, 00:23:51.202 
"data_size": 65536 00:23:51.202 }, 00:23:51.202 { 00:23:51.202 "name": "BaseBdev3", 00:23:51.202 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:51.202 "is_configured": true, 00:23:51.202 "data_offset": 0, 00:23:51.202 "data_size": 65536 00:23:51.202 }, 00:23:51.202 { 00:23:51.202 "name": "BaseBdev4", 00:23:51.202 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:51.202 "is_configured": true, 00:23:51.202 "data_offset": 0, 00:23:51.202 "data_size": 65536 00:23:51.202 } 00:23:51.202 ] 00:23:51.202 }' 00:23:51.202 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.461 [2024-11-06 09:16:50.273407] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:51.462 [2024-11-06 09:16:50.273767] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:51.462 [2024-11-06 09:16:50.273849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.462 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.462 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.462 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.462 09:16:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:52.448 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.449 "name": "raid_bdev1", 00:23:52.449 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:52.449 "strip_size_kb": 64, 00:23:52.449 "state": "online", 00:23:52.449 "raid_level": "raid5f", 00:23:52.449 "superblock": false, 00:23:52.449 "num_base_bdevs": 4, 00:23:52.449 "num_base_bdevs_discovered": 4, 00:23:52.449 "num_base_bdevs_operational": 4, 00:23:52.449 "base_bdevs_list": [ 00:23:52.449 { 00:23:52.449 "name": "spare", 00:23:52.449 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:52.449 "is_configured": true, 00:23:52.449 "data_offset": 0, 00:23:52.449 "data_size": 65536 00:23:52.449 }, 00:23:52.449 { 00:23:52.449 "name": "BaseBdev2", 00:23:52.449 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:52.449 "is_configured": true, 00:23:52.449 "data_offset": 0, 00:23:52.449 "data_size": 65536 00:23:52.449 }, 00:23:52.449 { 00:23:52.449 "name": "BaseBdev3", 00:23:52.449 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:52.449 "is_configured": true, 00:23:52.449 "data_offset": 0, 00:23:52.449 "data_size": 65536 00:23:52.449 }, 00:23:52.449 { 00:23:52.449 "name": "BaseBdev4", 00:23:52.449 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:52.449 "is_configured": true, 00:23:52.449 "data_offset": 0, 
00:23:52.449 "data_size": 65536 00:23:52.449 } 00:23:52.449 ] 00:23:52.449 }' 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.449 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.708 "name": "raid_bdev1", 00:23:52.708 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:52.708 "strip_size_kb": 64, 00:23:52.708 "state": "online", 00:23:52.708 "raid_level": 
"raid5f", 00:23:52.708 "superblock": false, 00:23:52.708 "num_base_bdevs": 4, 00:23:52.708 "num_base_bdevs_discovered": 4, 00:23:52.708 "num_base_bdevs_operational": 4, 00:23:52.708 "base_bdevs_list": [ 00:23:52.708 { 00:23:52.708 "name": "spare", 00:23:52.708 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:52.708 "is_configured": true, 00:23:52.708 "data_offset": 0, 00:23:52.708 "data_size": 65536 00:23:52.708 }, 00:23:52.708 { 00:23:52.708 "name": "BaseBdev2", 00:23:52.708 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:52.708 "is_configured": true, 00:23:52.708 "data_offset": 0, 00:23:52.708 "data_size": 65536 00:23:52.708 }, 00:23:52.708 { 00:23:52.708 "name": "BaseBdev3", 00:23:52.708 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:52.708 "is_configured": true, 00:23:52.708 "data_offset": 0, 00:23:52.708 "data_size": 65536 00:23:52.708 }, 00:23:52.708 { 00:23:52.708 "name": "BaseBdev4", 00:23:52.708 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:52.708 "is_configured": true, 00:23:52.708 "data_offset": 0, 00:23:52.708 "data_size": 65536 00:23:52.708 } 00:23:52.708 ] 00:23:52.708 }' 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.708 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.708 "name": "raid_bdev1", 00:23:52.708 "uuid": "df8b8e83-dac5-4af7-ba0c-00c8f35575ea", 00:23:52.708 "strip_size_kb": 64, 00:23:52.708 "state": "online", 00:23:52.708 "raid_level": "raid5f", 00:23:52.708 "superblock": false, 00:23:52.708 "num_base_bdevs": 4, 00:23:52.708 "num_base_bdevs_discovered": 4, 00:23:52.708 "num_base_bdevs_operational": 4, 00:23:52.708 "base_bdevs_list": [ 00:23:52.708 { 00:23:52.708 "name": "spare", 00:23:52.708 "uuid": "0f5b29d6-9976-5261-8514-b43502492c47", 00:23:52.708 "is_configured": true, 00:23:52.709 "data_offset": 0, 00:23:52.709 "data_size": 65536 00:23:52.709 }, 00:23:52.709 { 00:23:52.709 "name": "BaseBdev2", 
00:23:52.709 "uuid": "c6eb8931-88d9-50da-8409-bef8b988afe5", 00:23:52.709 "is_configured": true, 00:23:52.709 "data_offset": 0, 00:23:52.709 "data_size": 65536 00:23:52.709 }, 00:23:52.709 { 00:23:52.709 "name": "BaseBdev3", 00:23:52.709 "uuid": "0aa60a21-dd21-552a-b2e7-25b0186ad82d", 00:23:52.709 "is_configured": true, 00:23:52.709 "data_offset": 0, 00:23:52.709 "data_size": 65536 00:23:52.709 }, 00:23:52.709 { 00:23:52.709 "name": "BaseBdev4", 00:23:52.709 "uuid": "5ff6490d-b098-5c66-8252-ccb0888e5ef9", 00:23:52.709 "is_configured": true, 00:23:52.709 "data_offset": 0, 00:23:52.709 "data_size": 65536 00:23:52.709 } 00:23:52.709 ] 00:23:52.709 }' 00:23:52.709 09:16:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.709 09:16:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:52.967 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.967 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.227 [2024-11-06 09:16:52.008342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.227 [2024-11-06 09:16:52.008399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.227 [2024-11-06 09:16:52.008524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.227 [2024-11-06 09:16:52.008654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.227 [2024-11-06 09:16:52.008675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:53.227 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:53.486 /dev/nbd0 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.486 1+0 records in 00:23:53.486 1+0 records out 00:23:53.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499671 s, 8.2 MB/s 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:53.486 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:53.744 /dev/nbd1 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:53.744 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.745 1+0 records in 00:23:53.745 1+0 records out 00:23:53.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469706 s, 8.7 MB/s 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:53.745 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.004 09:16:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.263 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84308 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84308 ']' 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84308 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 84308 00:23:54.522 killing process with pid 84308 00:23:54.522 Received shutdown signal, test time was about 60.000000 seconds 00:23:54.522 00:23:54.522 Latency(us) 00:23:54.522 [2024-11-06T09:16:53.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.522 [2024-11-06T09:16:53.562Z] =================================================================================================================== 00:23:54.522 [2024-11-06T09:16:53.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84308' 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84308 00:23:54.522 [2024-11-06 09:16:53.464906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.522 09:16:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84308 00:23:55.089 [2024-11-06 09:16:54.022192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.464 ************************************ 00:23:56.464 END TEST raid5f_rebuild_test 00:23:56.464 ************************************ 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:56.464 00:23:56.464 real 0m20.714s 00:23:56.464 user 0m24.743s 00:23:56.464 sys 0m2.597s 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 09:16:55 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:23:56.464 09:16:55 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:56.464 09:16:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:56.464 09:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 ************************************ 00:23:56.464 START TEST raid5f_rebuild_test_sb 00:23:56.464 ************************************ 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:56.464 09:16:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:56.464 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84835 
00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84835 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 84835 ']' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.465 09:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.465 [2024-11-06 09:16:55.390928] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:23:56.465 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:56.465 Zero copy mechanism will not be used. 
00:23:56.465 [2024-11-06 09:16:55.391330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84835 ] 00:23:56.723 [2024-11-06 09:16:55.573926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.723 [2024-11-06 09:16:55.722566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.981 [2024-11-06 09:16:55.948405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.981 [2024-11-06 09:16:55.948717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 BaseBdev1_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 [2024-11-06 09:16:56.357315] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:57.547 [2024-11-06 09:16:56.357417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.547 [2024-11-06 09:16:56.357450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:57.547 [2024-11-06 09:16:56.357468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.547 [2024-11-06 09:16:56.360181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.547 [2024-11-06 09:16:56.360424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:57.547 BaseBdev1 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 BaseBdev2_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 [2024-11-06 09:16:56.415556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:57.547 [2024-11-06 09:16:56.415846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:23:57.547 [2024-11-06 09:16:56.415884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:57.547 [2024-11-06 09:16:56.415904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.547 [2024-11-06 09:16:56.418638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.547 [2024-11-06 09:16:56.418697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:57.547 BaseBdev2 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 BaseBdev3_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 [2024-11-06 09:16:56.481611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:57.547 [2024-11-06 09:16:56.481926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.547 [2024-11-06 09:16:56.482057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:57.547 [2024-11-06 
09:16:56.482155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.547 [2024-11-06 09:16:56.484905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.547 [2024-11-06 09:16:56.485113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:57.547 BaseBdev3 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 BaseBdev4_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.547 [2024-11-06 09:16:56.537864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:57.547 [2024-11-06 09:16:56.538129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.547 [2024-11-06 09:16:56.538165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:57.547 [2024-11-06 09:16:56.538203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.547 [2024-11-06 09:16:56.541051] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:57.547 [2024-11-06 09:16:56.541227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:57.547 BaseBdev4 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.547 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 spare_malloc 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 spare_delay 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 [2024-11-06 09:16:56.606461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:57.807 [2024-11-06 09:16:56.606742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.807 [2024-11-06 09:16:56.606780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:23:57.807 [2024-11-06 09:16:56.606797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.807 [2024-11-06 09:16:56.609561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.807 [2024-11-06 09:16:56.609620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:57.807 spare 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 [2024-11-06 09:16:56.618596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.807 [2024-11-06 09:16:56.621173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.807 [2024-11-06 09:16:56.621469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:57.807 [2024-11-06 09:16:56.621646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:57.807 [2024-11-06 09:16:56.621996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:57.807 [2024-11-06 09:16:56.622121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:57.807 [2024-11-06 09:16:56.622524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:57.807 [2024-11-06 09:16:56.631707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:57.807 [2024-11-06 09:16:56.631924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:23:57.807 [2024-11-06 09:16:56.632370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.807 09:16:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.807 "name": "raid_bdev1", 00:23:57.807 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:23:57.807 "strip_size_kb": 64, 00:23:57.807 "state": "online", 00:23:57.807 "raid_level": "raid5f", 00:23:57.807 "superblock": true, 00:23:57.807 "num_base_bdevs": 4, 00:23:57.807 "num_base_bdevs_discovered": 4, 00:23:57.807 "num_base_bdevs_operational": 4, 00:23:57.807 "base_bdevs_list": [ 00:23:57.807 { 00:23:57.807 "name": "BaseBdev1", 00:23:57.807 "uuid": "1b1049c6-b097-5a22-a7ed-a7367bf7e795", 00:23:57.807 "is_configured": true, 00:23:57.807 "data_offset": 2048, 00:23:57.807 "data_size": 63488 00:23:57.807 }, 00:23:57.807 { 00:23:57.807 "name": "BaseBdev2", 00:23:57.807 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:23:57.807 "is_configured": true, 00:23:57.807 "data_offset": 2048, 00:23:57.807 "data_size": 63488 00:23:57.807 }, 00:23:57.807 { 00:23:57.807 "name": "BaseBdev3", 00:23:57.807 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:23:57.807 "is_configured": true, 00:23:57.807 "data_offset": 2048, 00:23:57.807 "data_size": 63488 00:23:57.807 }, 00:23:57.807 { 00:23:57.807 "name": "BaseBdev4", 00:23:57.807 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:23:57.807 "is_configured": true, 00:23:57.807 "data_offset": 2048, 00:23:57.807 "data_size": 63488 00:23:57.807 } 00:23:57.807 ] 00:23:57.807 }' 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.807 09:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.066 09:16:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.066 [2024-11-06 09:16:57.057079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.066 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:58.325 09:16:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.325 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:58.584 [2024-11-06 09:16:57.392527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:58.584 /dev/nbd0 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.584 1+0 records in 00:23:58.584 
1+0 records out 00:23:58.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592477 s, 6.9 MB/s 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:58.584 09:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:59.151 496+0 records in 00:23:59.151 496+0 records out 00:23:59.151 97517568 bytes (98 MB, 93 MiB) copied, 0.547645 s, 178 MB/s 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.151 09:16:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.151 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.410 [2024-11-06 09:16:58.355652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.410 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.411 [2024-11-06 09:16:58.382243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:59.411 09:16:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.411 "name": "raid_bdev1", 00:23:59.411 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:23:59.411 "strip_size_kb": 64, 00:23:59.411 "state": "online", 00:23:59.411 "raid_level": "raid5f", 00:23:59.411 "superblock": true, 00:23:59.411 "num_base_bdevs": 4, 00:23:59.411 "num_base_bdevs_discovered": 3, 00:23:59.411 "num_base_bdevs_operational": 3, 00:23:59.411 
"base_bdevs_list": [ 00:23:59.411 { 00:23:59.411 "name": null, 00:23:59.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.411 "is_configured": false, 00:23:59.411 "data_offset": 0, 00:23:59.411 "data_size": 63488 00:23:59.411 }, 00:23:59.411 { 00:23:59.411 "name": "BaseBdev2", 00:23:59.411 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:23:59.411 "is_configured": true, 00:23:59.411 "data_offset": 2048, 00:23:59.411 "data_size": 63488 00:23:59.411 }, 00:23:59.411 { 00:23:59.411 "name": "BaseBdev3", 00:23:59.411 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:23:59.411 "is_configured": true, 00:23:59.411 "data_offset": 2048, 00:23:59.411 "data_size": 63488 00:23:59.411 }, 00:23:59.411 { 00:23:59.411 "name": "BaseBdev4", 00:23:59.411 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:23:59.411 "is_configured": true, 00:23:59.411 "data_offset": 2048, 00:23:59.411 "data_size": 63488 00:23:59.411 } 00:23:59.411 ] 00:23:59.411 }' 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.411 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.979 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.979 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.979 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.979 [2024-11-06 09:16:58.813678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.979 [2024-11-06 09:16:58.837854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:23:59.979 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.979 09:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:59.979 [2024-11-06 09:16:58.853188] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.915 "name": "raid_bdev1", 00:24:00.915 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:00.915 "strip_size_kb": 64, 00:24:00.915 "state": "online", 00:24:00.915 "raid_level": "raid5f", 00:24:00.915 "superblock": true, 00:24:00.915 "num_base_bdevs": 4, 00:24:00.915 "num_base_bdevs_discovered": 4, 00:24:00.915 "num_base_bdevs_operational": 4, 00:24:00.915 "process": { 00:24:00.915 "type": "rebuild", 00:24:00.915 "target": "spare", 00:24:00.915 "progress": { 00:24:00.915 "blocks": 17280, 00:24:00.915 "percent": 9 00:24:00.915 } 00:24:00.915 }, 00:24:00.915 "base_bdevs_list": [ 00:24:00.915 { 00:24:00.915 "name": "spare", 00:24:00.915 "uuid": 
"07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:00.915 "is_configured": true, 00:24:00.915 "data_offset": 2048, 00:24:00.915 "data_size": 63488 00:24:00.915 }, 00:24:00.915 { 00:24:00.915 "name": "BaseBdev2", 00:24:00.915 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:00.915 "is_configured": true, 00:24:00.915 "data_offset": 2048, 00:24:00.915 "data_size": 63488 00:24:00.915 }, 00:24:00.915 { 00:24:00.915 "name": "BaseBdev3", 00:24:00.915 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:00.915 "is_configured": true, 00:24:00.915 "data_offset": 2048, 00:24:00.915 "data_size": 63488 00:24:00.915 }, 00:24:00.915 { 00:24:00.915 "name": "BaseBdev4", 00:24:00.915 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:00.915 "is_configured": true, 00:24:00.915 "data_offset": 2048, 00:24:00.915 "data_size": 63488 00:24:00.915 } 00:24:00.915 ] 00:24:00.915 }' 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.915 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.173 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.173 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:01.173 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.173 09:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.173 [2024-11-06 09:16:59.989813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:01.173 [2024-11-06 09:17:00.063863] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:01.173 [2024-11-06 09:17:00.064035] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.173 [2024-11-06 09:17:00.064075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:01.173 [2024-11-06 09:17:00.064100] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.173 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.173 "name": "raid_bdev1", 00:24:01.173 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:01.173 "strip_size_kb": 64, 00:24:01.173 "state": "online", 00:24:01.173 "raid_level": "raid5f", 00:24:01.173 "superblock": true, 00:24:01.173 "num_base_bdevs": 4, 00:24:01.173 "num_base_bdevs_discovered": 3, 00:24:01.173 "num_base_bdevs_operational": 3, 00:24:01.173 "base_bdevs_list": [ 00:24:01.173 { 00:24:01.173 "name": null, 00:24:01.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.173 "is_configured": false, 00:24:01.173 "data_offset": 0, 00:24:01.173 "data_size": 63488 00:24:01.173 }, 00:24:01.173 { 00:24:01.173 "name": "BaseBdev2", 00:24:01.173 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:01.174 "is_configured": true, 00:24:01.174 "data_offset": 2048, 00:24:01.174 "data_size": 63488 00:24:01.174 }, 00:24:01.174 { 00:24:01.174 "name": "BaseBdev3", 00:24:01.174 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:01.174 "is_configured": true, 00:24:01.174 "data_offset": 2048, 00:24:01.174 "data_size": 63488 00:24:01.174 }, 00:24:01.174 { 00:24:01.174 "name": "BaseBdev4", 00:24:01.174 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:01.174 "is_configured": true, 00:24:01.174 "data_offset": 2048, 00:24:01.174 "data_size": 63488 00:24:01.174 } 00:24:01.174 ] 00:24:01.174 }' 00:24:01.174 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.174 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.742 
09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.742 "name": "raid_bdev1", 00:24:01.742 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:01.742 "strip_size_kb": 64, 00:24:01.742 "state": "online", 00:24:01.742 "raid_level": "raid5f", 00:24:01.742 "superblock": true, 00:24:01.742 "num_base_bdevs": 4, 00:24:01.742 "num_base_bdevs_discovered": 3, 00:24:01.742 "num_base_bdevs_operational": 3, 00:24:01.742 "base_bdevs_list": [ 00:24:01.742 { 00:24:01.742 "name": null, 00:24:01.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.742 "is_configured": false, 00:24:01.742 "data_offset": 0, 00:24:01.742 "data_size": 63488 00:24:01.742 }, 00:24:01.742 { 00:24:01.742 "name": "BaseBdev2", 00:24:01.742 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:01.742 "is_configured": true, 00:24:01.742 "data_offset": 2048, 00:24:01.742 "data_size": 63488 00:24:01.742 }, 00:24:01.742 { 00:24:01.742 "name": "BaseBdev3", 00:24:01.742 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:01.742 "is_configured": true, 00:24:01.742 "data_offset": 2048, 00:24:01.742 
"data_size": 63488 00:24:01.742 }, 00:24:01.742 { 00:24:01.742 "name": "BaseBdev4", 00:24:01.742 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:01.742 "is_configured": true, 00:24:01.742 "data_offset": 2048, 00:24:01.742 "data_size": 63488 00:24:01.742 } 00:24:01.742 ] 00:24:01.742 }' 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.742 [2024-11-06 09:17:00.704253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.742 [2024-11-06 09:17:00.722012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.742 09:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:01.742 [2024-11-06 09:17:00.733219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:03.123 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.123 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.123 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.123 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.123 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.124 "name": "raid_bdev1", 00:24:03.124 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:03.124 "strip_size_kb": 64, 00:24:03.124 "state": "online", 00:24:03.124 "raid_level": "raid5f", 00:24:03.124 "superblock": true, 00:24:03.124 "num_base_bdevs": 4, 00:24:03.124 "num_base_bdevs_discovered": 4, 00:24:03.124 "num_base_bdevs_operational": 4, 00:24:03.124 "process": { 00:24:03.124 "type": "rebuild", 00:24:03.124 "target": "spare", 00:24:03.124 "progress": { 00:24:03.124 "blocks": 17280, 00:24:03.124 "percent": 9 00:24:03.124 } 00:24:03.124 }, 00:24:03.124 "base_bdevs_list": [ 00:24:03.124 { 00:24:03.124 "name": "spare", 00:24:03.124 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 00:24:03.124 "name": "BaseBdev2", 00:24:03.124 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 
00:24:03.124 "name": "BaseBdev3", 00:24:03.124 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 00:24:03.124 "name": "BaseBdev4", 00:24:03.124 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 } 00:24:03.124 ] 00:24:03.124 }' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:03.124 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=636 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.124 "name": "raid_bdev1", 00:24:03.124 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:03.124 "strip_size_kb": 64, 00:24:03.124 "state": "online", 00:24:03.124 "raid_level": "raid5f", 00:24:03.124 "superblock": true, 00:24:03.124 "num_base_bdevs": 4, 00:24:03.124 "num_base_bdevs_discovered": 4, 00:24:03.124 "num_base_bdevs_operational": 4, 00:24:03.124 "process": { 00:24:03.124 "type": "rebuild", 00:24:03.124 "target": "spare", 00:24:03.124 "progress": { 00:24:03.124 "blocks": 21120, 00:24:03.124 "percent": 11 00:24:03.124 } 00:24:03.124 }, 00:24:03.124 "base_bdevs_list": [ 00:24:03.124 { 00:24:03.124 "name": "spare", 00:24:03.124 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 00:24:03.124 "name": "BaseBdev2", 00:24:03.124 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 
00:24:03.124 "name": "BaseBdev3", 00:24:03.124 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 }, 00:24:03.124 { 00:24:03.124 "name": "BaseBdev4", 00:24:03.124 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:03.124 "is_configured": true, 00:24:03.124 "data_offset": 2048, 00:24:03.124 "data_size": 63488 00:24:03.124 } 00:24:03.124 ] 00:24:03.124 }' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.124 09:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.124 09:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.124 09:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.061 09:17:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.061 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.061 "name": "raid_bdev1", 00:24:04.061 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:04.061 "strip_size_kb": 64, 00:24:04.061 "state": "online", 00:24:04.061 "raid_level": "raid5f", 00:24:04.061 "superblock": true, 00:24:04.061 "num_base_bdevs": 4, 00:24:04.061 "num_base_bdevs_discovered": 4, 00:24:04.061 "num_base_bdevs_operational": 4, 00:24:04.061 "process": { 00:24:04.061 "type": "rebuild", 00:24:04.061 "target": "spare", 00:24:04.061 "progress": { 00:24:04.061 "blocks": 42240, 00:24:04.061 "percent": 22 00:24:04.061 } 00:24:04.061 }, 00:24:04.061 "base_bdevs_list": [ 00:24:04.061 { 00:24:04.061 "name": "spare", 00:24:04.061 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:04.061 "is_configured": true, 00:24:04.061 "data_offset": 2048, 00:24:04.061 "data_size": 63488 00:24:04.061 }, 00:24:04.061 { 00:24:04.061 "name": "BaseBdev2", 00:24:04.061 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:04.061 "is_configured": true, 00:24:04.061 "data_offset": 2048, 00:24:04.061 "data_size": 63488 00:24:04.061 }, 00:24:04.061 { 00:24:04.061 "name": "BaseBdev3", 00:24:04.061 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:04.061 "is_configured": true, 00:24:04.061 "data_offset": 2048, 00:24:04.062 "data_size": 63488 00:24:04.062 }, 00:24:04.062 { 00:24:04.062 "name": "BaseBdev4", 00:24:04.062 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:04.062 "is_configured": true, 00:24:04.062 "data_offset": 2048, 00:24:04.062 "data_size": 63488 00:24:04.062 } 00:24:04.062 ] 00:24:04.062 }' 00:24:04.062 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.379 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.379 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.379 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.380 09:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.316 "name": "raid_bdev1", 00:24:05.316 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:05.316 "strip_size_kb": 64, 00:24:05.316 "state": 
"online", 00:24:05.316 "raid_level": "raid5f", 00:24:05.316 "superblock": true, 00:24:05.316 "num_base_bdevs": 4, 00:24:05.316 "num_base_bdevs_discovered": 4, 00:24:05.316 "num_base_bdevs_operational": 4, 00:24:05.316 "process": { 00:24:05.316 "type": "rebuild", 00:24:05.316 "target": "spare", 00:24:05.316 "progress": { 00:24:05.316 "blocks": 65280, 00:24:05.316 "percent": 34 00:24:05.316 } 00:24:05.316 }, 00:24:05.316 "base_bdevs_list": [ 00:24:05.316 { 00:24:05.316 "name": "spare", 00:24:05.316 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:05.316 "is_configured": true, 00:24:05.316 "data_offset": 2048, 00:24:05.316 "data_size": 63488 00:24:05.316 }, 00:24:05.316 { 00:24:05.316 "name": "BaseBdev2", 00:24:05.316 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:05.316 "is_configured": true, 00:24:05.316 "data_offset": 2048, 00:24:05.316 "data_size": 63488 00:24:05.316 }, 00:24:05.316 { 00:24:05.316 "name": "BaseBdev3", 00:24:05.316 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:05.316 "is_configured": true, 00:24:05.316 "data_offset": 2048, 00:24:05.316 "data_size": 63488 00:24:05.316 }, 00:24:05.316 { 00:24:05.316 "name": "BaseBdev4", 00:24:05.316 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:05.316 "is_configured": true, 00:24:05.316 "data_offset": 2048, 00:24:05.316 "data_size": 63488 00:24:05.316 } 00:24:05.316 ] 00:24:05.316 }' 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.316 09:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.693 "name": "raid_bdev1", 00:24:06.693 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:06.693 "strip_size_kb": 64, 00:24:06.693 "state": "online", 00:24:06.693 "raid_level": "raid5f", 00:24:06.693 "superblock": true, 00:24:06.693 "num_base_bdevs": 4, 00:24:06.693 "num_base_bdevs_discovered": 4, 00:24:06.693 "num_base_bdevs_operational": 4, 00:24:06.693 "process": { 00:24:06.693 "type": "rebuild", 00:24:06.693 "target": "spare", 00:24:06.693 "progress": { 00:24:06.693 "blocks": 86400, 00:24:06.693 "percent": 45 00:24:06.693 } 00:24:06.693 }, 00:24:06.693 "base_bdevs_list": [ 00:24:06.693 { 00:24:06.693 "name": "spare", 00:24:06.693 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 
00:24:06.693 "is_configured": true, 00:24:06.693 "data_offset": 2048, 00:24:06.693 "data_size": 63488 00:24:06.693 }, 00:24:06.693 { 00:24:06.693 "name": "BaseBdev2", 00:24:06.693 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:06.693 "is_configured": true, 00:24:06.693 "data_offset": 2048, 00:24:06.693 "data_size": 63488 00:24:06.693 }, 00:24:06.693 { 00:24:06.693 "name": "BaseBdev3", 00:24:06.693 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:06.693 "is_configured": true, 00:24:06.693 "data_offset": 2048, 00:24:06.693 "data_size": 63488 00:24:06.693 }, 00:24:06.693 { 00:24:06.693 "name": "BaseBdev4", 00:24:06.693 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:06.693 "is_configured": true, 00:24:06.693 "data_offset": 2048, 00:24:06.693 "data_size": 63488 00:24:06.693 } 00:24:06.693 ] 00:24:06.693 }' 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.693 09:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.628 09:17:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.628 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.628 "name": "raid_bdev1", 00:24:07.628 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:07.628 "strip_size_kb": 64, 00:24:07.628 "state": "online", 00:24:07.628 "raid_level": "raid5f", 00:24:07.628 "superblock": true, 00:24:07.628 "num_base_bdevs": 4, 00:24:07.628 "num_base_bdevs_discovered": 4, 00:24:07.628 "num_base_bdevs_operational": 4, 00:24:07.628 "process": { 00:24:07.628 "type": "rebuild", 00:24:07.628 "target": "spare", 00:24:07.628 "progress": { 00:24:07.628 "blocks": 107520, 00:24:07.628 "percent": 56 00:24:07.628 } 00:24:07.628 }, 00:24:07.628 "base_bdevs_list": [ 00:24:07.628 { 00:24:07.628 "name": "spare", 00:24:07.628 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:07.628 "is_configured": true, 00:24:07.628 "data_offset": 2048, 00:24:07.628 "data_size": 63488 00:24:07.628 }, 00:24:07.628 { 00:24:07.628 "name": "BaseBdev2", 00:24:07.628 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:07.628 "is_configured": true, 00:24:07.628 "data_offset": 2048, 00:24:07.628 "data_size": 63488 00:24:07.628 }, 00:24:07.628 { 00:24:07.628 "name": "BaseBdev3", 00:24:07.629 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:07.629 "is_configured": true, 00:24:07.629 "data_offset": 2048, 00:24:07.629 
"data_size": 63488 00:24:07.629 }, 00:24:07.629 { 00:24:07.629 "name": "BaseBdev4", 00:24:07.629 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:07.629 "is_configured": true, 00:24:07.629 "data_offset": 2048, 00:24:07.629 "data_size": 63488 00:24:07.629 } 00:24:07.629 ] 00:24:07.629 }' 00:24:07.629 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.629 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.629 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.629 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.629 09:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.598 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.856 
09:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.856 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.856 "name": "raid_bdev1", 00:24:08.856 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:08.856 "strip_size_kb": 64, 00:24:08.856 "state": "online", 00:24:08.856 "raid_level": "raid5f", 00:24:08.856 "superblock": true, 00:24:08.856 "num_base_bdevs": 4, 00:24:08.856 "num_base_bdevs_discovered": 4, 00:24:08.856 "num_base_bdevs_operational": 4, 00:24:08.856 "process": { 00:24:08.856 "type": "rebuild", 00:24:08.856 "target": "spare", 00:24:08.856 "progress": { 00:24:08.856 "blocks": 130560, 00:24:08.856 "percent": 68 00:24:08.856 } 00:24:08.856 }, 00:24:08.856 "base_bdevs_list": [ 00:24:08.856 { 00:24:08.856 "name": "spare", 00:24:08.856 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:08.856 "is_configured": true, 00:24:08.856 "data_offset": 2048, 00:24:08.856 "data_size": 63488 00:24:08.856 }, 00:24:08.856 { 00:24:08.856 "name": "BaseBdev2", 00:24:08.856 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:08.856 "is_configured": true, 00:24:08.856 "data_offset": 2048, 00:24:08.856 "data_size": 63488 00:24:08.856 }, 00:24:08.856 { 00:24:08.856 "name": "BaseBdev3", 00:24:08.856 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:08.856 "is_configured": true, 00:24:08.856 "data_offset": 2048, 00:24:08.856 "data_size": 63488 00:24:08.856 }, 00:24:08.856 { 00:24:08.856 "name": "BaseBdev4", 00:24:08.856 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:08.856 "is_configured": true, 00:24:08.856 "data_offset": 2048, 00:24:08.856 "data_size": 63488 00:24:08.856 } 00:24:08.856 ] 00:24:08.856 }' 00:24:08.856 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.856 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.856 09:17:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.856 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.857 09:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.791 "name": "raid_bdev1", 00:24:09.791 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:09.791 "strip_size_kb": 64, 00:24:09.791 "state": "online", 00:24:09.791 "raid_level": "raid5f", 00:24:09.791 "superblock": true, 00:24:09.791 "num_base_bdevs": 4, 00:24:09.791 "num_base_bdevs_discovered": 4, 00:24:09.791 "num_base_bdevs_operational": 
4, 00:24:09.791 "process": { 00:24:09.791 "type": "rebuild", 00:24:09.791 "target": "spare", 00:24:09.791 "progress": { 00:24:09.791 "blocks": 151680, 00:24:09.791 "percent": 79 00:24:09.791 } 00:24:09.791 }, 00:24:09.791 "base_bdevs_list": [ 00:24:09.791 { 00:24:09.791 "name": "spare", 00:24:09.791 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:09.791 "is_configured": true, 00:24:09.791 "data_offset": 2048, 00:24:09.791 "data_size": 63488 00:24:09.791 }, 00:24:09.791 { 00:24:09.791 "name": "BaseBdev2", 00:24:09.791 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:09.791 "is_configured": true, 00:24:09.791 "data_offset": 2048, 00:24:09.791 "data_size": 63488 00:24:09.791 }, 00:24:09.791 { 00:24:09.791 "name": "BaseBdev3", 00:24:09.791 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:09.791 "is_configured": true, 00:24:09.791 "data_offset": 2048, 00:24:09.791 "data_size": 63488 00:24:09.791 }, 00:24:09.791 { 00:24:09.791 "name": "BaseBdev4", 00:24:09.791 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:09.791 "is_configured": true, 00:24:09.791 "data_offset": 2048, 00:24:09.791 "data_size": 63488 00:24:09.791 } 00:24:09.791 ] 00:24:09.791 }' 00:24:09.791 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.049 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.049 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.049 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.049 09:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.983 
09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.983 "name": "raid_bdev1", 00:24:10.983 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:10.983 "strip_size_kb": 64, 00:24:10.983 "state": "online", 00:24:10.983 "raid_level": "raid5f", 00:24:10.983 "superblock": true, 00:24:10.983 "num_base_bdevs": 4, 00:24:10.983 "num_base_bdevs_discovered": 4, 00:24:10.983 "num_base_bdevs_operational": 4, 00:24:10.983 "process": { 00:24:10.983 "type": "rebuild", 00:24:10.983 "target": "spare", 00:24:10.983 "progress": { 00:24:10.983 "blocks": 174720, 00:24:10.983 "percent": 91 00:24:10.983 } 00:24:10.983 }, 00:24:10.983 "base_bdevs_list": [ 00:24:10.983 { 00:24:10.983 "name": "spare", 00:24:10.983 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:10.983 "is_configured": true, 00:24:10.983 "data_offset": 2048, 00:24:10.983 "data_size": 63488 00:24:10.983 }, 00:24:10.983 { 00:24:10.983 "name": "BaseBdev2", 00:24:10.983 "uuid": 
"44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:10.983 "is_configured": true, 00:24:10.983 "data_offset": 2048, 00:24:10.983 "data_size": 63488 00:24:10.983 }, 00:24:10.983 { 00:24:10.983 "name": "BaseBdev3", 00:24:10.983 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:10.983 "is_configured": true, 00:24:10.983 "data_offset": 2048, 00:24:10.983 "data_size": 63488 00:24:10.983 }, 00:24:10.983 { 00:24:10.983 "name": "BaseBdev4", 00:24:10.983 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:10.983 "is_configured": true, 00:24:10.983 "data_offset": 2048, 00:24:10.983 "data_size": 63488 00:24:10.983 } 00:24:10.983 ] 00:24:10.983 }' 00:24:10.983 09:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.983 09:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.983 09:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.241 09:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.241 09:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:11.806 [2024-11-06 09:17:10.812564] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:11.806 [2024-11-06 09:17:10.812668] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:11.806 [2024-11-06 09:17:10.812840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.064 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.064 "name": "raid_bdev1", 00:24:12.065 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:12.065 "strip_size_kb": 64, 00:24:12.065 "state": "online", 00:24:12.065 "raid_level": "raid5f", 00:24:12.065 "superblock": true, 00:24:12.065 "num_base_bdevs": 4, 00:24:12.065 "num_base_bdevs_discovered": 4, 00:24:12.065 "num_base_bdevs_operational": 4, 00:24:12.065 "base_bdevs_list": [ 00:24:12.065 { 00:24:12.065 "name": "spare", 00:24:12.065 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:12.065 "is_configured": true, 00:24:12.065 "data_offset": 2048, 00:24:12.065 "data_size": 63488 00:24:12.065 }, 00:24:12.065 { 00:24:12.065 "name": "BaseBdev2", 00:24:12.065 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:12.065 "is_configured": true, 00:24:12.065 "data_offset": 2048, 00:24:12.065 "data_size": 63488 00:24:12.065 }, 00:24:12.065 { 00:24:12.065 "name": "BaseBdev3", 00:24:12.065 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:12.065 "is_configured": true, 00:24:12.065 "data_offset": 2048, 00:24:12.065 "data_size": 63488 00:24:12.065 }, 
00:24:12.065 { 00:24:12.065 "name": "BaseBdev4", 00:24:12.065 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:12.065 "is_configured": true, 00:24:12.065 "data_offset": 2048, 00:24:12.065 "data_size": 63488 00:24:12.065 } 00:24:12.065 ] 00:24:12.065 }' 00:24:12.065 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.323 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.324 "name": "raid_bdev1", 00:24:12.324 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:12.324 "strip_size_kb": 64, 00:24:12.324 "state": "online", 00:24:12.324 "raid_level": "raid5f", 00:24:12.324 "superblock": true, 00:24:12.324 "num_base_bdevs": 4, 00:24:12.324 "num_base_bdevs_discovered": 4, 00:24:12.324 "num_base_bdevs_operational": 4, 00:24:12.324 "base_bdevs_list": [ 00:24:12.324 { 00:24:12.324 "name": "spare", 00:24:12.324 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:12.324 "is_configured": true, 00:24:12.324 "data_offset": 2048, 00:24:12.324 "data_size": 63488 00:24:12.324 }, 00:24:12.324 { 00:24:12.324 "name": "BaseBdev2", 00:24:12.324 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:12.324 "is_configured": true, 00:24:12.324 "data_offset": 2048, 00:24:12.324 "data_size": 63488 00:24:12.324 }, 00:24:12.324 { 00:24:12.324 "name": "BaseBdev3", 00:24:12.324 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:12.324 "is_configured": true, 00:24:12.324 "data_offset": 2048, 00:24:12.324 "data_size": 63488 00:24:12.324 }, 00:24:12.324 { 00:24:12.324 "name": "BaseBdev4", 00:24:12.324 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:12.324 "is_configured": true, 00:24:12.324 "data_offset": 2048, 00:24:12.324 "data_size": 63488 00:24:12.324 } 00:24:12.324 ] 00:24:12.324 }' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:12.324 09:17:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.324 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.582 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.582 "name": "raid_bdev1", 00:24:12.582 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:12.582 "strip_size_kb": 64, 00:24:12.582 "state": "online", 00:24:12.582 "raid_level": "raid5f", 00:24:12.582 "superblock": true, 00:24:12.582 "num_base_bdevs": 4, 00:24:12.582 "num_base_bdevs_discovered": 4, 00:24:12.582 "num_base_bdevs_operational": 4, 00:24:12.582 
"base_bdevs_list": [ 00:24:12.582 { 00:24:12.582 "name": "spare", 00:24:12.582 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:12.582 "is_configured": true, 00:24:12.582 "data_offset": 2048, 00:24:12.582 "data_size": 63488 00:24:12.582 }, 00:24:12.582 { 00:24:12.582 "name": "BaseBdev2", 00:24:12.582 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:12.582 "is_configured": true, 00:24:12.582 "data_offset": 2048, 00:24:12.582 "data_size": 63488 00:24:12.582 }, 00:24:12.582 { 00:24:12.582 "name": "BaseBdev3", 00:24:12.582 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:12.582 "is_configured": true, 00:24:12.582 "data_offset": 2048, 00:24:12.582 "data_size": 63488 00:24:12.582 }, 00:24:12.582 { 00:24:12.582 "name": "BaseBdev4", 00:24:12.582 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:12.582 "is_configured": true, 00:24:12.582 "data_offset": 2048, 00:24:12.582 "data_size": 63488 00:24:12.582 } 00:24:12.582 ] 00:24:12.582 }' 00:24:12.582 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.582 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.841 [2024-11-06 09:17:11.794377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.841 [2024-11-06 09:17:11.794572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:12.841 [2024-11-06 09:17:11.794697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.841 [2024-11-06 09:17:11.794811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:24:12.841 [2024-11-06 09:17:11.794837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.841 09:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:13.099 /dev/nbd0 00:24:13.099 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.357 1+0 records in 00:24:13.357 1+0 records out 00:24:13.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431525 s, 9.5 MB/s 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:13.357 09:17:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.357 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:13.615 /dev/nbd1 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:24:13.615 1+0 records in 00:24:13.615 1+0 records out 00:24:13.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483509 s, 8.5 MB/s 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.615 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:13.873 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:14.131 09:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.131 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.389 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.389 [2024-11-06 09:17:13.192472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:14.389 [2024-11-06 09:17:13.192564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.389 [2024-11-06 09:17:13.192598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:14.389 [2024-11-06 09:17:13.192613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.389 [2024-11-06 09:17:13.195552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.389 [2024-11-06 09:17:13.195602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:14.389 [2024-11-06 09:17:13.195723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:14.389 [2024-11-06 09:17:13.195787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.389 [2024-11-06 09:17:13.195964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.389 [2024-11-06 09:17:13.196067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:14.390 [2024-11-06 09:17:13.196156] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:14.390 spare 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.390 [2024-11-06 09:17:13.296123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:14.390 [2024-11-06 09:17:13.296199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:14.390 [2024-11-06 09:17:13.296657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:24:14.390 [2024-11-06 09:17:13.305822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:14.390 [2024-11-06 09:17:13.305867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:14.390 [2024-11-06 09:17:13.306161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.390 "name": "raid_bdev1", 00:24:14.390 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:14.390 "strip_size_kb": 64, 00:24:14.390 "state": "online", 00:24:14.390 "raid_level": "raid5f", 00:24:14.390 "superblock": true, 00:24:14.390 "num_base_bdevs": 4, 00:24:14.390 "num_base_bdevs_discovered": 4, 00:24:14.390 "num_base_bdevs_operational": 4, 00:24:14.390 "base_bdevs_list": [ 00:24:14.390 { 00:24:14.390 "name": "spare", 00:24:14.390 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:14.390 "is_configured": true, 00:24:14.390 "data_offset": 2048, 00:24:14.390 "data_size": 63488 00:24:14.390 }, 00:24:14.390 { 00:24:14.390 "name": "BaseBdev2", 00:24:14.390 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:14.390 "is_configured": true, 00:24:14.390 "data_offset": 
2048, 00:24:14.390 "data_size": 63488 00:24:14.390 }, 00:24:14.390 { 00:24:14.390 "name": "BaseBdev3", 00:24:14.390 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:14.390 "is_configured": true, 00:24:14.390 "data_offset": 2048, 00:24:14.390 "data_size": 63488 00:24:14.390 }, 00:24:14.390 { 00:24:14.390 "name": "BaseBdev4", 00:24:14.390 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:14.390 "is_configured": true, 00:24:14.390 "data_offset": 2048, 00:24:14.390 "data_size": 63488 00:24:14.390 } 00:24:14.390 ] 00:24:14.390 }' 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.390 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.957 "name": 
"raid_bdev1", 00:24:14.957 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:14.957 "strip_size_kb": 64, 00:24:14.957 "state": "online", 00:24:14.957 "raid_level": "raid5f", 00:24:14.957 "superblock": true, 00:24:14.957 "num_base_bdevs": 4, 00:24:14.957 "num_base_bdevs_discovered": 4, 00:24:14.957 "num_base_bdevs_operational": 4, 00:24:14.957 "base_bdevs_list": [ 00:24:14.957 { 00:24:14.957 "name": "spare", 00:24:14.957 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:14.957 "is_configured": true, 00:24:14.957 "data_offset": 2048, 00:24:14.957 "data_size": 63488 00:24:14.957 }, 00:24:14.957 { 00:24:14.957 "name": "BaseBdev2", 00:24:14.957 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:14.957 "is_configured": true, 00:24:14.957 "data_offset": 2048, 00:24:14.957 "data_size": 63488 00:24:14.957 }, 00:24:14.957 { 00:24:14.957 "name": "BaseBdev3", 00:24:14.957 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:14.957 "is_configured": true, 00:24:14.957 "data_offset": 2048, 00:24:14.957 "data_size": 63488 00:24:14.957 }, 00:24:14.957 { 00:24:14.957 "name": "BaseBdev4", 00:24:14.957 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:14.957 "is_configured": true, 00:24:14.957 "data_offset": 2048, 00:24:14.957 "data_size": 63488 00:24:14.957 } 00:24:14.957 ] 00:24:14.957 }' 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.957 
09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.957 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:15.243 09:17:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.243 [2024-11-06 09:17:14.019468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.243 "name": "raid_bdev1", 00:24:15.243 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:15.243 "strip_size_kb": 64, 00:24:15.243 "state": "online", 00:24:15.243 "raid_level": "raid5f", 00:24:15.243 "superblock": true, 00:24:15.243 "num_base_bdevs": 4, 00:24:15.243 "num_base_bdevs_discovered": 3, 00:24:15.243 "num_base_bdevs_operational": 3, 00:24:15.243 "base_bdevs_list": [ 00:24:15.243 { 00:24:15.243 "name": null, 00:24:15.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.243 "is_configured": false, 00:24:15.243 "data_offset": 0, 00:24:15.243 "data_size": 63488 00:24:15.243 }, 00:24:15.243 { 00:24:15.243 "name": "BaseBdev2", 00:24:15.243 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:15.243 "is_configured": true, 00:24:15.243 "data_offset": 2048, 00:24:15.243 "data_size": 63488 00:24:15.243 }, 00:24:15.243 { 00:24:15.243 "name": "BaseBdev3", 00:24:15.243 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:15.243 "is_configured": true, 00:24:15.243 "data_offset": 2048, 00:24:15.243 "data_size": 63488 00:24:15.243 }, 00:24:15.243 { 00:24:15.243 "name": "BaseBdev4", 00:24:15.243 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:15.243 "is_configured": true, 00:24:15.243 "data_offset": 
2048, 00:24:15.243 "data_size": 63488 00:24:15.243 } 00:24:15.243 ] 00:24:15.243 }' 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.243 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.503 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:15.503 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.503 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.503 [2024-11-06 09:17:14.478856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.503 [2024-11-06 09:17:14.479309] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:15.503 [2024-11-06 09:17:14.479344] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:15.503 [2024-11-06 09:17:14.479397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.503 [2024-11-06 09:17:14.496554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:24:15.503 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.503 09:17:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:15.503 [2024-11-06 09:17:14.508013] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.882 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.882 "name": "raid_bdev1", 00:24:16.882 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:16.882 "strip_size_kb": 64, 00:24:16.882 "state": "online", 00:24:16.882 
"raid_level": "raid5f", 00:24:16.882 "superblock": true, 00:24:16.882 "num_base_bdevs": 4, 00:24:16.882 "num_base_bdevs_discovered": 4, 00:24:16.882 "num_base_bdevs_operational": 4, 00:24:16.882 "process": { 00:24:16.882 "type": "rebuild", 00:24:16.882 "target": "spare", 00:24:16.882 "progress": { 00:24:16.882 "blocks": 17280, 00:24:16.882 "percent": 9 00:24:16.882 } 00:24:16.882 }, 00:24:16.882 "base_bdevs_list": [ 00:24:16.882 { 00:24:16.882 "name": "spare", 00:24:16.882 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:16.882 "is_configured": true, 00:24:16.882 "data_offset": 2048, 00:24:16.882 "data_size": 63488 00:24:16.882 }, 00:24:16.882 { 00:24:16.882 "name": "BaseBdev2", 00:24:16.883 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 }, 00:24:16.883 { 00:24:16.883 "name": "BaseBdev3", 00:24:16.883 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 }, 00:24:16.883 { 00:24:16.883 "name": "BaseBdev4", 00:24:16.883 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 } 00:24:16.883 ] 00:24:16.883 }' 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.883 [2024-11-06 09:17:15.668023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.883 [2024-11-06 09:17:15.718078] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:16.883 [2024-11-06 09:17:15.718178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.883 [2024-11-06 09:17:15.718217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.883 [2024-11-06 09:17:15.718234] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.883 "name": "raid_bdev1", 00:24:16.883 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:16.883 "strip_size_kb": 64, 00:24:16.883 "state": "online", 00:24:16.883 "raid_level": "raid5f", 00:24:16.883 "superblock": true, 00:24:16.883 "num_base_bdevs": 4, 00:24:16.883 "num_base_bdevs_discovered": 3, 00:24:16.883 "num_base_bdevs_operational": 3, 00:24:16.883 "base_bdevs_list": [ 00:24:16.883 { 00:24:16.883 "name": null, 00:24:16.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.883 "is_configured": false, 00:24:16.883 "data_offset": 0, 00:24:16.883 "data_size": 63488 00:24:16.883 }, 00:24:16.883 { 00:24:16.883 "name": "BaseBdev2", 00:24:16.883 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 }, 00:24:16.883 { 00:24:16.883 "name": "BaseBdev3", 00:24:16.883 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 }, 00:24:16.883 { 00:24:16.883 "name": "BaseBdev4", 00:24:16.883 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:16.883 "is_configured": true, 00:24:16.883 "data_offset": 2048, 00:24:16.883 "data_size": 63488 00:24:16.883 } 00:24:16.883 ] 00:24:16.883 }' 
00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.883 09:17:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.451 09:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:17.451 09:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.451 09:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.451 [2024-11-06 09:17:16.219093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:17.451 [2024-11-06 09:17:16.219183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.451 [2024-11-06 09:17:16.219222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:17.451 [2024-11-06 09:17:16.219240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.451 [2024-11-06 09:17:16.219867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.451 [2024-11-06 09:17:16.219905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:17.451 [2024-11-06 09:17:16.220018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:17.451 [2024-11-06 09:17:16.220039] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:17.451 [2024-11-06 09:17:16.220054] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:17.451 [2024-11-06 09:17:16.220090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.451 [2024-11-06 09:17:16.237113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:24:17.451 spare 00:24:17.451 09:17:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.451 09:17:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:17.451 [2024-11-06 09:17:16.248202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.388 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.388 "name": "raid_bdev1", 00:24:18.388 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:18.388 "strip_size_kb": 64, 00:24:18.388 "state": 
"online", 00:24:18.388 "raid_level": "raid5f", 00:24:18.388 "superblock": true, 00:24:18.388 "num_base_bdevs": 4, 00:24:18.388 "num_base_bdevs_discovered": 4, 00:24:18.388 "num_base_bdevs_operational": 4, 00:24:18.388 "process": { 00:24:18.388 "type": "rebuild", 00:24:18.388 "target": "spare", 00:24:18.388 "progress": { 00:24:18.388 "blocks": 17280, 00:24:18.388 "percent": 9 00:24:18.388 } 00:24:18.388 }, 00:24:18.388 "base_bdevs_list": [ 00:24:18.388 { 00:24:18.388 "name": "spare", 00:24:18.388 "uuid": "07f8d704-50e1-5b4f-9364-1fe3d150052f", 00:24:18.388 "is_configured": true, 00:24:18.388 "data_offset": 2048, 00:24:18.388 "data_size": 63488 00:24:18.388 }, 00:24:18.388 { 00:24:18.388 "name": "BaseBdev2", 00:24:18.388 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:18.388 "is_configured": true, 00:24:18.388 "data_offset": 2048, 00:24:18.388 "data_size": 63488 00:24:18.388 }, 00:24:18.388 { 00:24:18.388 "name": "BaseBdev3", 00:24:18.388 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:18.388 "is_configured": true, 00:24:18.388 "data_offset": 2048, 00:24:18.388 "data_size": 63488 00:24:18.388 }, 00:24:18.388 { 00:24:18.389 "name": "BaseBdev4", 00:24:18.389 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:18.389 "is_configured": true, 00:24:18.389 "data_offset": 2048, 00:24:18.389 "data_size": 63488 00:24:18.389 } 00:24:18.389 ] 00:24:18.389 }' 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:18.389 09:17:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.389 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.389 [2024-11-06 09:17:17.392118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.647 [2024-11-06 09:17:17.457852] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:18.647 [2024-11-06 09:17:17.457961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.647 [2024-11-06 09:17:17.457988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.647 [2024-11-06 09:17:17.457999] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.648 09:17:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.648 "name": "raid_bdev1", 00:24:18.648 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:18.648 "strip_size_kb": 64, 00:24:18.648 "state": "online", 00:24:18.648 "raid_level": "raid5f", 00:24:18.648 "superblock": true, 00:24:18.648 "num_base_bdevs": 4, 00:24:18.648 "num_base_bdevs_discovered": 3, 00:24:18.648 "num_base_bdevs_operational": 3, 00:24:18.648 "base_bdevs_list": [ 00:24:18.648 { 00:24:18.648 "name": null, 00:24:18.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.648 "is_configured": false, 00:24:18.648 "data_offset": 0, 00:24:18.648 "data_size": 63488 00:24:18.648 }, 00:24:18.648 { 00:24:18.648 "name": "BaseBdev2", 00:24:18.648 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:18.648 "is_configured": true, 00:24:18.648 "data_offset": 2048, 00:24:18.648 "data_size": 63488 00:24:18.648 }, 00:24:18.648 { 00:24:18.648 "name": "BaseBdev3", 00:24:18.648 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:18.648 "is_configured": true, 00:24:18.648 "data_offset": 2048, 00:24:18.648 "data_size": 63488 00:24:18.648 }, 00:24:18.648 { 00:24:18.648 "name": "BaseBdev4", 00:24:18.648 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:18.648 "is_configured": true, 00:24:18.648 "data_offset": 2048, 00:24:18.648 
"data_size": 63488 00:24:18.648 } 00:24:18.648 ] 00:24:18.648 }' 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.648 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.917 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.917 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.917 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:18.917 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:18.917 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.177 09:17:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.177 "name": "raid_bdev1", 00:24:19.177 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:19.177 "strip_size_kb": 64, 00:24:19.177 "state": "online", 00:24:19.177 "raid_level": "raid5f", 00:24:19.177 "superblock": true, 00:24:19.177 "num_base_bdevs": 4, 00:24:19.177 "num_base_bdevs_discovered": 3, 00:24:19.177 "num_base_bdevs_operational": 3, 00:24:19.177 "base_bdevs_list": [ 00:24:19.177 { 00:24:19.177 "name": null, 00:24:19.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.177 
"is_configured": false, 00:24:19.177 "data_offset": 0, 00:24:19.177 "data_size": 63488 00:24:19.177 }, 00:24:19.177 { 00:24:19.177 "name": "BaseBdev2", 00:24:19.177 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:19.177 "is_configured": true, 00:24:19.177 "data_offset": 2048, 00:24:19.177 "data_size": 63488 00:24:19.177 }, 00:24:19.177 { 00:24:19.177 "name": "BaseBdev3", 00:24:19.177 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:19.177 "is_configured": true, 00:24:19.177 "data_offset": 2048, 00:24:19.177 "data_size": 63488 00:24:19.177 }, 00:24:19.177 { 00:24:19.177 "name": "BaseBdev4", 00:24:19.177 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:19.177 "is_configured": true, 00:24:19.177 "data_offset": 2048, 00:24:19.177 "data_size": 63488 00:24:19.177 } 00:24:19.177 ] 00:24:19.177 }' 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.177 09:17:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.177 [2024-11-06 09:17:18.101186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:19.177 [2024-11-06 09:17:18.101259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.177 [2024-11-06 09:17:18.101308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:19.177 [2024-11-06 09:17:18.101324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.177 [2024-11-06 09:17:18.101949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.177 [2024-11-06 09:17:18.101989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.177 [2024-11-06 09:17:18.102094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:19.177 [2024-11-06 09:17:18.102112] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:19.177 [2024-11-06 09:17:18.102130] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:19.177 [2024-11-06 09:17:18.102146] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:19.177 BaseBdev1 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.177 09:17:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.116 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.374 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.374 "name": "raid_bdev1", 00:24:20.374 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:20.374 "strip_size_kb": 64, 00:24:20.374 "state": "online", 00:24:20.374 "raid_level": "raid5f", 00:24:20.374 "superblock": true, 00:24:20.374 "num_base_bdevs": 4, 00:24:20.374 "num_base_bdevs_discovered": 3, 00:24:20.374 "num_base_bdevs_operational": 3, 00:24:20.374 "base_bdevs_list": [ 00:24:20.374 { 00:24:20.374 "name": null, 00:24:20.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.374 "is_configured": false, 00:24:20.374 
"data_offset": 0, 00:24:20.374 "data_size": 63488 00:24:20.374 }, 00:24:20.374 { 00:24:20.374 "name": "BaseBdev2", 00:24:20.374 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:20.374 "is_configured": true, 00:24:20.374 "data_offset": 2048, 00:24:20.374 "data_size": 63488 00:24:20.374 }, 00:24:20.374 { 00:24:20.374 "name": "BaseBdev3", 00:24:20.374 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:20.374 "is_configured": true, 00:24:20.374 "data_offset": 2048, 00:24:20.374 "data_size": 63488 00:24:20.374 }, 00:24:20.374 { 00:24:20.374 "name": "BaseBdev4", 00:24:20.374 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:20.374 "is_configured": true, 00:24:20.374 "data_offset": 2048, 00:24:20.374 "data_size": 63488 00:24:20.374 } 00:24:20.374 ] 00:24:20.374 }' 00:24:20.374 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.374 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.632 "name": "raid_bdev1", 00:24:20.632 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:20.632 "strip_size_kb": 64, 00:24:20.632 "state": "online", 00:24:20.632 "raid_level": "raid5f", 00:24:20.632 "superblock": true, 00:24:20.632 "num_base_bdevs": 4, 00:24:20.632 "num_base_bdevs_discovered": 3, 00:24:20.632 "num_base_bdevs_operational": 3, 00:24:20.632 "base_bdevs_list": [ 00:24:20.632 { 00:24:20.632 "name": null, 00:24:20.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.632 "is_configured": false, 00:24:20.632 "data_offset": 0, 00:24:20.632 "data_size": 63488 00:24:20.632 }, 00:24:20.632 { 00:24:20.632 "name": "BaseBdev2", 00:24:20.632 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:20.632 "is_configured": true, 00:24:20.632 "data_offset": 2048, 00:24:20.632 "data_size": 63488 00:24:20.632 }, 00:24:20.632 { 00:24:20.632 "name": "BaseBdev3", 00:24:20.632 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:20.632 "is_configured": true, 00:24:20.632 "data_offset": 2048, 00:24:20.632 "data_size": 63488 00:24:20.632 }, 00:24:20.632 { 00:24:20.632 "name": "BaseBdev4", 00:24:20.632 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:20.632 "is_configured": true, 00:24:20.632 "data_offset": 2048, 00:24:20.632 "data_size": 63488 00:24:20.632 } 00:24:20.632 ] 00:24:20.632 }' 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.632 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:20.891 
09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.891 [2024-11-06 09:17:19.731122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:20.891 [2024-11-06 09:17:19.731341] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:20.891 [2024-11-06 09:17:19.731365] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:20.891 request: 00:24:20.891 { 00:24:20.891 "base_bdev": "BaseBdev1", 00:24:20.891 "raid_bdev": "raid_bdev1", 00:24:20.891 "method": "bdev_raid_add_base_bdev", 00:24:20.891 "req_id": 1 00:24:20.891 } 00:24:20.891 Got JSON-RPC error response 00:24:20.891 response: 00:24:20.891 { 00:24:20.891 "code": -22, 00:24:20.891 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:24:20.891 } 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:20.891 09:17:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.835 "name": "raid_bdev1", 00:24:21.835 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:21.835 "strip_size_kb": 64, 00:24:21.835 "state": "online", 00:24:21.835 "raid_level": "raid5f", 00:24:21.835 "superblock": true, 00:24:21.835 "num_base_bdevs": 4, 00:24:21.835 "num_base_bdevs_discovered": 3, 00:24:21.835 "num_base_bdevs_operational": 3, 00:24:21.835 "base_bdevs_list": [ 00:24:21.835 { 00:24:21.835 "name": null, 00:24:21.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.835 "is_configured": false, 00:24:21.835 "data_offset": 0, 00:24:21.835 "data_size": 63488 00:24:21.835 }, 00:24:21.835 { 00:24:21.835 "name": "BaseBdev2", 00:24:21.835 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:21.835 "is_configured": true, 00:24:21.835 "data_offset": 2048, 00:24:21.835 "data_size": 63488 00:24:21.835 }, 00:24:21.835 { 00:24:21.835 "name": "BaseBdev3", 00:24:21.835 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:21.835 "is_configured": true, 00:24:21.835 "data_offset": 2048, 00:24:21.835 "data_size": 63488 00:24:21.835 }, 00:24:21.835 { 00:24:21.835 "name": "BaseBdev4", 00:24:21.835 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:21.835 "is_configured": true, 00:24:21.835 "data_offset": 2048, 00:24:21.835 "data_size": 63488 00:24:21.835 } 00:24:21.835 ] 00:24:21.835 }' 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.835 09:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.402 "name": "raid_bdev1", 00:24:22.402 "uuid": "c1ace16f-b258-45f3-9a95-c2dda0fb1a5c", 00:24:22.402 "strip_size_kb": 64, 00:24:22.402 "state": "online", 00:24:22.402 "raid_level": "raid5f", 00:24:22.402 "superblock": true, 00:24:22.402 "num_base_bdevs": 4, 00:24:22.402 "num_base_bdevs_discovered": 3, 00:24:22.402 "num_base_bdevs_operational": 3, 00:24:22.402 "base_bdevs_list": [ 00:24:22.402 { 00:24:22.402 "name": null, 00:24:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.402 "is_configured": false, 00:24:22.402 "data_offset": 0, 00:24:22.402 "data_size": 63488 00:24:22.402 }, 00:24:22.402 { 00:24:22.402 "name": "BaseBdev2", 00:24:22.402 "uuid": "44b26de2-43b9-5bb2-adda-b104bc076d70", 00:24:22.402 "is_configured": true, 
00:24:22.402 "data_offset": 2048, 00:24:22.402 "data_size": 63488 00:24:22.402 }, 00:24:22.402 { 00:24:22.402 "name": "BaseBdev3", 00:24:22.402 "uuid": "14ee8739-3490-5677-b3f7-0f9fb826073d", 00:24:22.402 "is_configured": true, 00:24:22.402 "data_offset": 2048, 00:24:22.402 "data_size": 63488 00:24:22.402 }, 00:24:22.402 { 00:24:22.402 "name": "BaseBdev4", 00:24:22.402 "uuid": "107f4c9f-8926-595c-8edb-33533afc98fd", 00:24:22.402 "is_configured": true, 00:24:22.402 "data_offset": 2048, 00:24:22.402 "data_size": 63488 00:24:22.402 } 00:24:22.402 ] 00:24:22.402 }' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84835 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 84835 ']' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 84835 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84835 00:24:22.402 killing process with pid 84835 00:24:22.402 Received shutdown signal, test time was about 60.000000 seconds 00:24:22.402 00:24:22.402 Latency(us) 00:24:22.402 [2024-11-06T09:17:21.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.402 [2024-11-06T09:17:21.442Z] 
=================================================================================================================== 00:24:22.402 [2024-11-06T09:17:21.442Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84835' 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 84835 00:24:22.402 [2024-11-06 09:17:21.349502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:22.402 09:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 84835 00:24:22.402 [2024-11-06 09:17:21.349652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.402 [2024-11-06 09:17:21.349737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.402 [2024-11-06 09:17:21.349754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:22.996 [2024-11-06 09:17:21.862762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.374 ************************************ 00:24:24.374 END TEST raid5f_rebuild_test_sb 00:24:24.374 ************************************ 00:24:24.374 09:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:24.374 00:24:24.374 real 0m27.717s 00:24:24.374 user 0m34.777s 00:24:24.374 sys 0m3.622s 00:24:24.374 09:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:24.374 09:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.374 09:17:23 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:24:24.374 09:17:23 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:24.374 09:17:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:24.374 09:17:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:24.374 09:17:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.374 ************************************ 00:24:24.374 START TEST raid_state_function_test_sb_4k 00:24:24.374 ************************************ 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:24.374 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.374 09:17:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85651 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85651' 00:24:24.375 Process raid pid: 85651 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85651 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 85651 ']' 00:24:24.375 09:17:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.375 09:17:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.375 [2024-11-06 09:17:23.200229] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:24:24.375 [2024-11-06 09:17:23.200386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.375 [2024-11-06 09:17:23.368873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.634 [2024-11-06 09:17:23.495205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.934 [2024-11-06 09:17:23.709742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.934 [2024-11-06 09:17:23.709801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.193 [2024-11-06 09:17:24.120001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.193 [2024-11-06 09:17:24.120072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.193 [2024-11-06 09:17:24.120085] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.193 [2024-11-06 09:17:24.120099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.193 
09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.193 "name": "Existed_Raid", 00:24:25.193 "uuid": "ea02bc11-7fd7-4253-a7ab-6e5a1873a518", 00:24:25.193 "strip_size_kb": 0, 00:24:25.193 "state": "configuring", 00:24:25.193 "raid_level": "raid1", 00:24:25.193 "superblock": true, 00:24:25.193 "num_base_bdevs": 2, 00:24:25.193 "num_base_bdevs_discovered": 0, 00:24:25.193 "num_base_bdevs_operational": 2, 00:24:25.193 "base_bdevs_list": [ 00:24:25.193 { 00:24:25.193 "name": "BaseBdev1", 00:24:25.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.193 "is_configured": false, 00:24:25.193 "data_offset": 0, 00:24:25.193 "data_size": 0 00:24:25.193 }, 00:24:25.193 { 00:24:25.193 "name": "BaseBdev2", 00:24:25.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.193 "is_configured": false, 00:24:25.193 "data_offset": 0, 00:24:25.193 "data_size": 0 00:24:25.193 } 00:24:25.193 ] 00:24:25.193 }' 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.193 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.760 [2024-11-06 09:17:24.507476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:25.760 [2024-11-06 09:17:24.507523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.760 [2024-11-06 09:17:24.519543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.760 [2024-11-06 09:17:24.519612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.760 [2024-11-06 09:17:24.519624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.760 [2024-11-06 09:17:24.519641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.760 09:17:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.760 [2024-11-06 09:17:24.568136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.760 BaseBdev1 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:25.760 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.761 [ 00:24:25.761 { 00:24:25.761 "name": "BaseBdev1", 00:24:25.761 "aliases": [ 00:24:25.761 
"5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c" 00:24:25.761 ], 00:24:25.761 "product_name": "Malloc disk", 00:24:25.761 "block_size": 4096, 00:24:25.761 "num_blocks": 8192, 00:24:25.761 "uuid": "5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c", 00:24:25.761 "assigned_rate_limits": { 00:24:25.761 "rw_ios_per_sec": 0, 00:24:25.761 "rw_mbytes_per_sec": 0, 00:24:25.761 "r_mbytes_per_sec": 0, 00:24:25.761 "w_mbytes_per_sec": 0 00:24:25.761 }, 00:24:25.761 "claimed": true, 00:24:25.761 "claim_type": "exclusive_write", 00:24:25.761 "zoned": false, 00:24:25.761 "supported_io_types": { 00:24:25.761 "read": true, 00:24:25.761 "write": true, 00:24:25.761 "unmap": true, 00:24:25.761 "flush": true, 00:24:25.761 "reset": true, 00:24:25.761 "nvme_admin": false, 00:24:25.761 "nvme_io": false, 00:24:25.761 "nvme_io_md": false, 00:24:25.761 "write_zeroes": true, 00:24:25.761 "zcopy": true, 00:24:25.761 "get_zone_info": false, 00:24:25.761 "zone_management": false, 00:24:25.761 "zone_append": false, 00:24:25.761 "compare": false, 00:24:25.761 "compare_and_write": false, 00:24:25.761 "abort": true, 00:24:25.761 "seek_hole": false, 00:24:25.761 "seek_data": false, 00:24:25.761 "copy": true, 00:24:25.761 "nvme_iov_md": false 00:24:25.761 }, 00:24:25.761 "memory_domains": [ 00:24:25.761 { 00:24:25.761 "dma_device_id": "system", 00:24:25.761 "dma_device_type": 1 00:24:25.761 }, 00:24:25.761 { 00:24:25.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.761 "dma_device_type": 2 00:24:25.761 } 00:24:25.761 ], 00:24:25.761 "driver_specific": {} 00:24:25.761 } 00:24:25.761 ] 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.761 "name": "Existed_Raid", 00:24:25.761 "uuid": "951a0c69-47c7-4174-9950-b94485a18777", 00:24:25.761 "strip_size_kb": 0, 00:24:25.761 "state": "configuring", 00:24:25.761 "raid_level": "raid1", 00:24:25.761 "superblock": true, 00:24:25.761 "num_base_bdevs": 2, 00:24:25.761 
"num_base_bdevs_discovered": 1, 00:24:25.761 "num_base_bdevs_operational": 2, 00:24:25.761 "base_bdevs_list": [ 00:24:25.761 { 00:24:25.761 "name": "BaseBdev1", 00:24:25.761 "uuid": "5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c", 00:24:25.761 "is_configured": true, 00:24:25.761 "data_offset": 256, 00:24:25.761 "data_size": 7936 00:24:25.761 }, 00:24:25.761 { 00:24:25.761 "name": "BaseBdev2", 00:24:25.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.761 "is_configured": false, 00:24:25.761 "data_offset": 0, 00:24:25.761 "data_size": 0 00:24:25.761 } 00:24:25.761 ] 00:24:25.761 }' 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.761 09:17:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.020 [2024-11-06 09:17:25.051513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:26.020 [2024-11-06 09:17:25.051583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.020 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.279 [2024-11-06 09:17:25.063613] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.279 [2024-11-06 09:17:25.065831] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.279 [2024-11-06 09:17:25.065891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.279 "name": "Existed_Raid", 00:24:26.279 "uuid": "7e763e76-b215-4746-8e82-b98f8e8070ea", 00:24:26.279 "strip_size_kb": 0, 00:24:26.279 "state": "configuring", 00:24:26.279 "raid_level": "raid1", 00:24:26.279 "superblock": true, 00:24:26.279 "num_base_bdevs": 2, 00:24:26.279 "num_base_bdevs_discovered": 1, 00:24:26.279 "num_base_bdevs_operational": 2, 00:24:26.279 "base_bdevs_list": [ 00:24:26.279 { 00:24:26.279 "name": "BaseBdev1", 00:24:26.279 "uuid": "5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c", 00:24:26.279 "is_configured": true, 00:24:26.279 "data_offset": 256, 00:24:26.279 "data_size": 7936 00:24:26.279 }, 00:24:26.279 { 00:24:26.279 "name": "BaseBdev2", 00:24:26.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.279 "is_configured": false, 00:24:26.279 "data_offset": 0, 00:24:26.279 "data_size": 0 00:24:26.279 } 00:24:26.279 ] 00:24:26.279 }' 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.279 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.556 09:17:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 [2024-11-06 09:17:25.564539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.556 [2024-11-06 09:17:25.564838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:26.556 [2024-11-06 09:17:25.564856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:26.556 [2024-11-06 09:17:25.565157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:26.556 BaseBdev2 00:24:26.556 [2024-11-06 09:17:25.565339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:26.556 [2024-11-06 09:17:25.565355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:26.556 [2024-11-06 09:17:25.565530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:26.556 09:17:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.557 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:26.557 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.557 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.839 [ 00:24:26.839 { 00:24:26.839 "name": "BaseBdev2", 00:24:26.839 "aliases": [ 00:24:26.839 "b801c3d7-c746-4db6-8c3c-1f0ed095a55e" 00:24:26.839 ], 00:24:26.839 "product_name": "Malloc disk", 00:24:26.839 "block_size": 4096, 00:24:26.839 "num_blocks": 8192, 00:24:26.839 "uuid": "b801c3d7-c746-4db6-8c3c-1f0ed095a55e", 00:24:26.839 "assigned_rate_limits": { 00:24:26.839 "rw_ios_per_sec": 0, 00:24:26.839 "rw_mbytes_per_sec": 0, 00:24:26.839 "r_mbytes_per_sec": 0, 00:24:26.839 "w_mbytes_per_sec": 0 00:24:26.839 }, 00:24:26.839 "claimed": true, 00:24:26.839 "claim_type": "exclusive_write", 00:24:26.839 "zoned": false, 00:24:26.839 "supported_io_types": { 00:24:26.839 "read": true, 00:24:26.839 "write": true, 00:24:26.839 "unmap": true, 00:24:26.839 "flush": true, 00:24:26.839 "reset": true, 00:24:26.839 "nvme_admin": false, 00:24:26.839 "nvme_io": false, 00:24:26.839 "nvme_io_md": false, 00:24:26.839 "write_zeroes": true, 00:24:26.839 "zcopy": true, 00:24:26.839 "get_zone_info": false, 00:24:26.839 "zone_management": false, 00:24:26.839 "zone_append": false, 00:24:26.839 "compare": false, 00:24:26.839 "compare_and_write": false, 00:24:26.839 "abort": true, 00:24:26.839 "seek_hole": false, 00:24:26.839 "seek_data": false, 00:24:26.839 "copy": true, 00:24:26.839 "nvme_iov_md": false 
00:24:26.839 }, 00:24:26.839 "memory_domains": [ 00:24:26.839 { 00:24:26.839 "dma_device_id": "system", 00:24:26.839 "dma_device_type": 1 00:24:26.840 }, 00:24:26.840 { 00:24:26.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.840 "dma_device_type": 2 00:24:26.840 } 00:24:26.840 ], 00:24:26.840 "driver_specific": {} 00:24:26.840 } 00:24:26.840 ] 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.840 "name": "Existed_Raid", 00:24:26.840 "uuid": "7e763e76-b215-4746-8e82-b98f8e8070ea", 00:24:26.840 "strip_size_kb": 0, 00:24:26.840 "state": "online", 00:24:26.840 "raid_level": "raid1", 00:24:26.840 "superblock": true, 00:24:26.840 "num_base_bdevs": 2, 00:24:26.840 "num_base_bdevs_discovered": 2, 00:24:26.840 "num_base_bdevs_operational": 2, 00:24:26.840 "base_bdevs_list": [ 00:24:26.840 { 00:24:26.840 "name": "BaseBdev1", 00:24:26.840 "uuid": "5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c", 00:24:26.840 "is_configured": true, 00:24:26.840 "data_offset": 256, 00:24:26.840 "data_size": 7936 00:24:26.840 }, 00:24:26.840 { 00:24:26.840 "name": "BaseBdev2", 00:24:26.840 "uuid": "b801c3d7-c746-4db6-8c3c-1f0ed095a55e", 00:24:26.840 "is_configured": true, 00:24:26.840 "data_offset": 256, 00:24:26.840 "data_size": 7936 00:24:26.840 } 00:24:26.840 ] 00:24:26.840 }' 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.840 09:17:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:27.097 09:17:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:27.097 [2024-11-06 09:17:26.064217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.097 "name": "Existed_Raid", 00:24:27.097 "aliases": [ 00:24:27.097 "7e763e76-b215-4746-8e82-b98f8e8070ea" 00:24:27.097 ], 00:24:27.097 "product_name": "Raid Volume", 00:24:27.097 "block_size": 4096, 00:24:27.097 "num_blocks": 7936, 00:24:27.097 "uuid": "7e763e76-b215-4746-8e82-b98f8e8070ea", 00:24:27.097 "assigned_rate_limits": { 00:24:27.097 "rw_ios_per_sec": 0, 00:24:27.097 "rw_mbytes_per_sec": 0, 00:24:27.097 "r_mbytes_per_sec": 0, 00:24:27.097 "w_mbytes_per_sec": 0 00:24:27.097 }, 00:24:27.097 "claimed": false, 00:24:27.097 "zoned": false, 00:24:27.097 "supported_io_types": { 00:24:27.097 "read": true, 
00:24:27.097 "write": true, 00:24:27.097 "unmap": false, 00:24:27.097 "flush": false, 00:24:27.097 "reset": true, 00:24:27.097 "nvme_admin": false, 00:24:27.097 "nvme_io": false, 00:24:27.097 "nvme_io_md": false, 00:24:27.097 "write_zeroes": true, 00:24:27.097 "zcopy": false, 00:24:27.097 "get_zone_info": false, 00:24:27.097 "zone_management": false, 00:24:27.097 "zone_append": false, 00:24:27.097 "compare": false, 00:24:27.097 "compare_and_write": false, 00:24:27.097 "abort": false, 00:24:27.097 "seek_hole": false, 00:24:27.097 "seek_data": false, 00:24:27.097 "copy": false, 00:24:27.097 "nvme_iov_md": false 00:24:27.097 }, 00:24:27.097 "memory_domains": [ 00:24:27.097 { 00:24:27.097 "dma_device_id": "system", 00:24:27.097 "dma_device_type": 1 00:24:27.097 }, 00:24:27.097 { 00:24:27.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.097 "dma_device_type": 2 00:24:27.097 }, 00:24:27.097 { 00:24:27.097 "dma_device_id": "system", 00:24:27.097 "dma_device_type": 1 00:24:27.097 }, 00:24:27.097 { 00:24:27.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.097 "dma_device_type": 2 00:24:27.097 } 00:24:27.097 ], 00:24:27.097 "driver_specific": { 00:24:27.097 "raid": { 00:24:27.097 "uuid": "7e763e76-b215-4746-8e82-b98f8e8070ea", 00:24:27.097 "strip_size_kb": 0, 00:24:27.097 "state": "online", 00:24:27.097 "raid_level": "raid1", 00:24:27.097 "superblock": true, 00:24:27.097 "num_base_bdevs": 2, 00:24:27.097 "num_base_bdevs_discovered": 2, 00:24:27.097 "num_base_bdevs_operational": 2, 00:24:27.097 "base_bdevs_list": [ 00:24:27.097 { 00:24:27.098 "name": "BaseBdev1", 00:24:27.098 "uuid": "5ed647a8-a4ae-4fe9-8b7f-96b5b47e518c", 00:24:27.098 "is_configured": true, 00:24:27.098 "data_offset": 256, 00:24:27.098 "data_size": 7936 00:24:27.098 }, 00:24:27.098 { 00:24:27.098 "name": "BaseBdev2", 00:24:27.098 "uuid": "b801c3d7-c746-4db6-8c3c-1f0ed095a55e", 00:24:27.098 "is_configured": true, 00:24:27.098 "data_offset": 256, 00:24:27.098 "data_size": 7936 00:24:27.098 } 
00:24:27.098 ] 00:24:27.098 } 00:24:27.098 } 00:24:27.098 }' 00:24:27.098 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:27.356 BaseBdev2' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.356 09:17:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.356 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.356 [2024-11-06 09:17:26.315636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:27.615 09:17:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.615 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.616 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.616 "name": "Existed_Raid", 00:24:27.616 "uuid": "7e763e76-b215-4746-8e82-b98f8e8070ea", 00:24:27.616 "strip_size_kb": 0, 00:24:27.616 "state": "online", 00:24:27.616 "raid_level": "raid1", 00:24:27.616 "superblock": true, 00:24:27.616 
"num_base_bdevs": 2, 00:24:27.616 "num_base_bdevs_discovered": 1, 00:24:27.616 "num_base_bdevs_operational": 1, 00:24:27.616 "base_bdevs_list": [ 00:24:27.616 { 00:24:27.616 "name": null, 00:24:27.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.616 "is_configured": false, 00:24:27.616 "data_offset": 0, 00:24:27.616 "data_size": 7936 00:24:27.616 }, 00:24:27.616 { 00:24:27.616 "name": "BaseBdev2", 00:24:27.616 "uuid": "b801c3d7-c746-4db6-8c3c-1f0ed095a55e", 00:24:27.616 "is_configured": true, 00:24:27.616 "data_offset": 256, 00:24:27.616 "data_size": 7936 00:24:27.616 } 00:24:27.616 ] 00:24:27.616 }' 00:24:27.616 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.616 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.875 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.133 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:28.133 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:28.133 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:24:28.133 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.133 09:17:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.133 [2024-11-06 09:17:26.936607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:28.134 [2024-11-06 09:17:26.936721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:28.134 [2024-11-06 09:17:27.033905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.134 [2024-11-06 09:17:27.034192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.134 [2024-11-06 09:17:27.034461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:28.134 09:17:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85651 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 85651 ']' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 85651 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85651 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.134 killing process with pid 85651 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85651' 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 85651 00:24:28.134 [2024-11-06 09:17:27.136542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.134 09:17:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 85651 00:24:28.134 [2024-11-06 09:17:27.152723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.534 09:17:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:24:29.534 00:24:29.534 real 0m5.201s 00:24:29.534 user 0m7.482s 00:24:29.534 sys 0m1.033s 00:24:29.534 
************************************ 00:24:29.534 END TEST raid_state_function_test_sb_4k 00:24:29.534 ************************************ 00:24:29.534 09:17:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:29.534 09:17:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.534 09:17:28 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:29.534 09:17:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:29.534 09:17:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:29.534 09:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:29.534 ************************************ 00:24:29.534 START TEST raid_superblock_test_4k 00:24:29.534 ************************************ 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85899 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85899 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 85899 ']' 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.534 09:17:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.534 [2024-11-06 09:17:28.457109] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:24:29.534 [2024-11-06 09:17:28.457244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85899 ] 00:24:29.793 [2024-11-06 09:17:28.637323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.793 [2024-11-06 09:17:28.764570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.051 [2024-11-06 09:17:28.981279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.051 [2024-11-06 09:17:28.981341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.309 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.568 malloc1 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.568 [2024-11-06 09:17:29.366499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:30.568 [2024-11-06 09:17:29.366696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.568 [2024-11-06 09:17:29.366759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:30.568 [2024-11-06 09:17:29.366850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.568 [2024-11-06 09:17:29.369673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.568 [2024-11-06 09:17:29.369830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:30.568 pt1 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.568 malloc2 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.568 [2024-11-06 09:17:29.424076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:30.568 [2024-11-06 09:17:29.424141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.568 [2024-11-06 09:17:29.424169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:30.568 [2024-11-06 09:17:29.424182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.568 [2024-11-06 09:17:29.426688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.568 [2024-11-06 
09:17:29.426730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:30.568 pt2 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.568 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.568 [2024-11-06 09:17:29.436119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:30.568 [2024-11-06 09:17:29.438421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:30.569 [2024-11-06 09:17:29.438602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:30.569 [2024-11-06 09:17:29.438622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:30.569 [2024-11-06 09:17:29.438884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:30.569 [2024-11-06 09:17:29.439038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:30.569 [2024-11-06 09:17:29.439055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:30.569 [2024-11-06 09:17:29.439212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.569 "name": "raid_bdev1", 00:24:30.569 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:30.569 "strip_size_kb": 0, 00:24:30.569 "state": "online", 00:24:30.569 "raid_level": "raid1", 00:24:30.569 "superblock": true, 00:24:30.569 "num_base_bdevs": 2, 00:24:30.569 
"num_base_bdevs_discovered": 2, 00:24:30.569 "num_base_bdevs_operational": 2, 00:24:30.569 "base_bdevs_list": [ 00:24:30.569 { 00:24:30.569 "name": "pt1", 00:24:30.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:30.569 "is_configured": true, 00:24:30.569 "data_offset": 256, 00:24:30.569 "data_size": 7936 00:24:30.569 }, 00:24:30.569 { 00:24:30.569 "name": "pt2", 00:24:30.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:30.569 "is_configured": true, 00:24:30.569 "data_offset": 256, 00:24:30.569 "data_size": 7936 00:24:30.569 } 00:24:30.569 ] 00:24:30.569 }' 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.569 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.135 [2024-11-06 09:17:29.895865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.135 "name": "raid_bdev1", 00:24:31.135 "aliases": [ 00:24:31.135 "d90b04ac-f473-4e6f-a082-4e209c14dfbc" 00:24:31.135 ], 00:24:31.135 "product_name": "Raid Volume", 00:24:31.135 "block_size": 4096, 00:24:31.135 "num_blocks": 7936, 00:24:31.135 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:31.135 "assigned_rate_limits": { 00:24:31.135 "rw_ios_per_sec": 0, 00:24:31.135 "rw_mbytes_per_sec": 0, 00:24:31.135 "r_mbytes_per_sec": 0, 00:24:31.135 "w_mbytes_per_sec": 0 00:24:31.135 }, 00:24:31.135 "claimed": false, 00:24:31.135 "zoned": false, 00:24:31.135 "supported_io_types": { 00:24:31.135 "read": true, 00:24:31.135 "write": true, 00:24:31.135 "unmap": false, 00:24:31.135 "flush": false, 00:24:31.135 "reset": true, 00:24:31.135 "nvme_admin": false, 00:24:31.135 "nvme_io": false, 00:24:31.135 "nvme_io_md": false, 00:24:31.135 "write_zeroes": true, 00:24:31.135 "zcopy": false, 00:24:31.135 "get_zone_info": false, 00:24:31.135 "zone_management": false, 00:24:31.135 "zone_append": false, 00:24:31.135 "compare": false, 00:24:31.135 "compare_and_write": false, 00:24:31.135 "abort": false, 00:24:31.135 "seek_hole": false, 00:24:31.135 "seek_data": false, 00:24:31.135 "copy": false, 00:24:31.135 "nvme_iov_md": false 00:24:31.135 }, 00:24:31.135 "memory_domains": [ 00:24:31.135 { 00:24:31.135 "dma_device_id": "system", 00:24:31.135 "dma_device_type": 1 00:24:31.135 }, 00:24:31.135 { 00:24:31.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.135 "dma_device_type": 2 00:24:31.135 }, 00:24:31.135 { 00:24:31.135 "dma_device_id": "system", 00:24:31.135 "dma_device_type": 1 00:24:31.135 }, 00:24:31.135 { 00:24:31.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.135 "dma_device_type": 2 00:24:31.135 } 00:24:31.135 ], 
00:24:31.135 "driver_specific": { 00:24:31.135 "raid": { 00:24:31.135 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:31.135 "strip_size_kb": 0, 00:24:31.135 "state": "online", 00:24:31.135 "raid_level": "raid1", 00:24:31.135 "superblock": true, 00:24:31.135 "num_base_bdevs": 2, 00:24:31.135 "num_base_bdevs_discovered": 2, 00:24:31.135 "num_base_bdevs_operational": 2, 00:24:31.135 "base_bdevs_list": [ 00:24:31.135 { 00:24:31.135 "name": "pt1", 00:24:31.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.135 "is_configured": true, 00:24:31.135 "data_offset": 256, 00:24:31.135 "data_size": 7936 00:24:31.135 }, 00:24:31.135 { 00:24:31.135 "name": "pt2", 00:24:31.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.135 "is_configured": true, 00:24:31.135 "data_offset": 256, 00:24:31.135 "data_size": 7936 00:24:31.135 } 00:24:31.135 ] 00:24:31.135 } 00:24:31.135 } 00:24:31.135 }' 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:31.135 pt2' 00:24:31.135 09:17:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.136 [2024-11-06 09:17:30.127558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d90b04ac-f473-4e6f-a082-4e209c14dfbc 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d90b04ac-f473-4e6f-a082-4e209c14dfbc ']' 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.136 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.136 [2024-11-06 09:17:30.171170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.136 [2024-11-06 09:17:30.171325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:31.136 [2024-11-06 09:17:30.171490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:31.136 [2024-11-06 09:17:30.171585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:31.136 [2024-11-06 09:17:30.171828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:31.395 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 [2024-11-06 09:17:30.307003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:31.396 [2024-11-06 09:17:30.309431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:31.396 [2024-11-06 09:17:30.309627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:31.396 [2024-11-06 09:17:30.309872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:31.396 [2024-11-06 09:17:30.309992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.396 [2024-11-06 09:17:30.310073] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:31.396 request: 00:24:31.396 { 00:24:31.396 "name": "raid_bdev1", 00:24:31.396 "raid_level": "raid1", 00:24:31.396 "base_bdevs": [ 00:24:31.396 "malloc1", 00:24:31.396 "malloc2" 00:24:31.396 ], 00:24:31.396 "superblock": false, 00:24:31.396 "method": "bdev_raid_create", 00:24:31.396 "req_id": 1 00:24:31.396 } 00:24:31.396 Got JSON-RPC error response 00:24:31.396 response: 00:24:31.396 { 00:24:31.396 "code": -17, 00:24:31.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:31.396 } 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 [2024-11-06 09:17:30.366911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:31.396 [2024-11-06 09:17:30.367114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.396 [2024-11-06 09:17:30.367142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:31.396 [2024-11-06 09:17:30.367159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.396 [2024-11-06 09:17:30.369819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.396 [2024-11-06 09:17:30.369862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:31.396 [2024-11-06 09:17:30.369967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:31.396 [2024-11-06 09:17:30.370037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:31.396 pt1 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.396 "name": "raid_bdev1", 00:24:31.396 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:31.396 "strip_size_kb": 0, 00:24:31.396 "state": "configuring", 00:24:31.396 "raid_level": "raid1", 00:24:31.396 "superblock": true, 00:24:31.396 "num_base_bdevs": 2, 00:24:31.396 "num_base_bdevs_discovered": 1, 00:24:31.396 "num_base_bdevs_operational": 2, 00:24:31.396 "base_bdevs_list": [ 00:24:31.396 { 00:24:31.396 "name": "pt1", 00:24:31.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.396 "is_configured": true, 00:24:31.396 "data_offset": 256, 00:24:31.396 "data_size": 7936 00:24:31.396 }, 00:24:31.396 { 00:24:31.396 "name": null, 00:24:31.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.396 "is_configured": false, 00:24:31.396 "data_offset": 256, 00:24:31.396 "data_size": 7936 00:24:31.396 } 
00:24:31.396 ] 00:24:31.396 }' 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.396 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.966 [2024-11-06 09:17:30.806411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:31.966 [2024-11-06 09:17:30.806663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.966 [2024-11-06 09:17:30.806697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:31.966 [2024-11-06 09:17:30.806713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.966 [2024-11-06 09:17:30.807252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.966 [2024-11-06 09:17:30.807311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:31.966 [2024-11-06 09:17:30.807405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:31.966 [2024-11-06 09:17:30.807436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:31.966 [2024-11-06 09:17:30.807573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:24:31.966 [2024-11-06 09:17:30.807592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:31.966 [2024-11-06 09:17:30.807852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:31.966 [2024-11-06 09:17:30.808003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:31.966 [2024-11-06 09:17:30.808014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:31.966 [2024-11-06 09:17:30.808158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.966 pt2 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.966 "name": "raid_bdev1", 00:24:31.966 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:31.966 "strip_size_kb": 0, 00:24:31.966 "state": "online", 00:24:31.966 "raid_level": "raid1", 00:24:31.966 "superblock": true, 00:24:31.966 "num_base_bdevs": 2, 00:24:31.966 "num_base_bdevs_discovered": 2, 00:24:31.966 "num_base_bdevs_operational": 2, 00:24:31.966 "base_bdevs_list": [ 00:24:31.966 { 00:24:31.966 "name": "pt1", 00:24:31.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.966 "is_configured": true, 00:24:31.966 "data_offset": 256, 00:24:31.966 "data_size": 7936 00:24:31.966 }, 00:24:31.966 { 00:24:31.966 "name": "pt2", 00:24:31.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.966 "is_configured": true, 00:24:31.966 "data_offset": 256, 00:24:31.966 "data_size": 7936 00:24:31.966 } 00:24:31.966 ] 00:24:31.966 }' 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.966 09:17:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:32.225 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.225 [2024-11-06 09:17:31.246689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.485 "name": "raid_bdev1", 00:24:32.485 "aliases": [ 00:24:32.485 "d90b04ac-f473-4e6f-a082-4e209c14dfbc" 00:24:32.485 ], 00:24:32.485 "product_name": "Raid Volume", 00:24:32.485 "block_size": 4096, 00:24:32.485 "num_blocks": 7936, 00:24:32.485 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:32.485 "assigned_rate_limits": { 00:24:32.485 "rw_ios_per_sec": 0, 00:24:32.485 "rw_mbytes_per_sec": 0, 00:24:32.485 "r_mbytes_per_sec": 0, 00:24:32.485 "w_mbytes_per_sec": 0 00:24:32.485 }, 00:24:32.485 "claimed": false, 00:24:32.485 "zoned": false, 00:24:32.485 "supported_io_types": { 00:24:32.485 "read": true, 00:24:32.485 "write": true, 00:24:32.485 "unmap": false, 
00:24:32.485 "flush": false, 00:24:32.485 "reset": true, 00:24:32.485 "nvme_admin": false, 00:24:32.485 "nvme_io": false, 00:24:32.485 "nvme_io_md": false, 00:24:32.485 "write_zeroes": true, 00:24:32.485 "zcopy": false, 00:24:32.485 "get_zone_info": false, 00:24:32.485 "zone_management": false, 00:24:32.485 "zone_append": false, 00:24:32.485 "compare": false, 00:24:32.485 "compare_and_write": false, 00:24:32.485 "abort": false, 00:24:32.485 "seek_hole": false, 00:24:32.485 "seek_data": false, 00:24:32.485 "copy": false, 00:24:32.485 "nvme_iov_md": false 00:24:32.485 }, 00:24:32.485 "memory_domains": [ 00:24:32.485 { 00:24:32.485 "dma_device_id": "system", 00:24:32.485 "dma_device_type": 1 00:24:32.485 }, 00:24:32.485 { 00:24:32.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.485 "dma_device_type": 2 00:24:32.485 }, 00:24:32.485 { 00:24:32.485 "dma_device_id": "system", 00:24:32.485 "dma_device_type": 1 00:24:32.485 }, 00:24:32.485 { 00:24:32.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.485 "dma_device_type": 2 00:24:32.485 } 00:24:32.485 ], 00:24:32.485 "driver_specific": { 00:24:32.485 "raid": { 00:24:32.485 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:32.485 "strip_size_kb": 0, 00:24:32.485 "state": "online", 00:24:32.485 "raid_level": "raid1", 00:24:32.485 "superblock": true, 00:24:32.485 "num_base_bdevs": 2, 00:24:32.485 "num_base_bdevs_discovered": 2, 00:24:32.485 "num_base_bdevs_operational": 2, 00:24:32.485 "base_bdevs_list": [ 00:24:32.485 { 00:24:32.485 "name": "pt1", 00:24:32.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.485 "is_configured": true, 00:24:32.485 "data_offset": 256, 00:24:32.485 "data_size": 7936 00:24:32.485 }, 00:24:32.485 { 00:24:32.485 "name": "pt2", 00:24:32.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.485 "is_configured": true, 00:24:32.485 "data_offset": 256, 00:24:32.485 "data_size": 7936 00:24:32.485 } 00:24:32.485 ] 00:24:32.485 } 00:24:32.485 } 00:24:32.485 }' 00:24:32.485 
09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:32.485 pt2' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.485 
09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.485 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.485 [2024-11-06 09:17:31.494681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:32.744 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d90b04ac-f473-4e6f-a082-4e209c14dfbc '!=' d90b04ac-f473-4e6f-a082-4e209c14dfbc ']' 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.745 [2024-11-06 09:17:31.534473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:32.745 
09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.745 "name": "raid_bdev1", 00:24:32.745 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 
00:24:32.745 "strip_size_kb": 0, 00:24:32.745 "state": "online", 00:24:32.745 "raid_level": "raid1", 00:24:32.745 "superblock": true, 00:24:32.745 "num_base_bdevs": 2, 00:24:32.745 "num_base_bdevs_discovered": 1, 00:24:32.745 "num_base_bdevs_operational": 1, 00:24:32.745 "base_bdevs_list": [ 00:24:32.745 { 00:24:32.745 "name": null, 00:24:32.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.745 "is_configured": false, 00:24:32.745 "data_offset": 0, 00:24:32.745 "data_size": 7936 00:24:32.745 }, 00:24:32.745 { 00:24:32.745 "name": "pt2", 00:24:32.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.745 "is_configured": true, 00:24:32.745 "data_offset": 256, 00:24:32.745 "data_size": 7936 00:24:32.745 } 00:24:32.745 ] 00:24:32.745 }' 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.745 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:33.003 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.003 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 [2024-11-06 09:17:31.974462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.004 [2024-11-06 09:17:31.974504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.004 [2024-11-06 09:17:31.974596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.004 [2024-11-06 09:17:31.974648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.004 [2024-11-06 09:17:31.974664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:33.004 09:17:31 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.004 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.004 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.004 09:17:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:33.004 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.004 09:17:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:24:33.004 09:17:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.004 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.264 [2024-11-06 09:17:32.046416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:33.264 [2024-11-06 09:17:32.046493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.264 [2024-11-06 09:17:32.046516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:33.264 [2024-11-06 09:17:32.046531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.264 [2024-11-06 09:17:32.049297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.264 [2024-11-06 09:17:32.049360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:33.264 [2024-11-06 09:17:32.049462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:33.264 [2024-11-06 09:17:32.049517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:33.264 [2024-11-06 09:17:32.049639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:33.264 [2024-11-06 09:17:32.049655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:33.264 [2024-11-06 09:17:32.049911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:33.264 [2024-11-06 09:17:32.050062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:33.264 [2024-11-06 09:17:32.050072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:24:33.264 [2024-11-06 09:17:32.050331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.264 pt2 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.264 "name": "raid_bdev1", 00:24:33.264 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:33.264 "strip_size_kb": 0, 00:24:33.264 "state": "online", 00:24:33.264 "raid_level": "raid1", 00:24:33.264 "superblock": true, 00:24:33.264 "num_base_bdevs": 2, 00:24:33.264 "num_base_bdevs_discovered": 1, 00:24:33.264 "num_base_bdevs_operational": 1, 00:24:33.264 "base_bdevs_list": [ 00:24:33.264 { 00:24:33.264 "name": null, 00:24:33.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.264 "is_configured": false, 00:24:33.264 "data_offset": 256, 00:24:33.264 "data_size": 7936 00:24:33.264 }, 00:24:33.264 { 00:24:33.264 "name": "pt2", 00:24:33.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.264 "is_configured": true, 00:24:33.264 "data_offset": 256, 00:24:33.264 "data_size": 7936 00:24:33.264 } 00:24:33.264 ] 00:24:33.264 }' 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.264 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.525 [2024-11-06 09:17:32.502398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.525 [2024-11-06 09:17:32.502589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.525 [2024-11-06 09:17:32.502695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.525 [2024-11-06 09:17:32.502757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.525 [2024-11-06 09:17:32.502772] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.525 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.784 [2024-11-06 09:17:32.566430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:33.784 [2024-11-06 09:17:32.566502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.784 [2024-11-06 09:17:32.566527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:33.784 [2024-11-06 09:17:32.566540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.784 [2024-11-06 09:17:32.569227] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.784 [2024-11-06 09:17:32.569270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:33.784 [2024-11-06 09:17:32.569402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:33.784 [2024-11-06 09:17:32.569453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:33.784 [2024-11-06 09:17:32.569606] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:33.784 [2024-11-06 09:17:32.569619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.784 [2024-11-06 09:17:32.569638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:33.784 [2024-11-06 09:17:32.569710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:33.784 [2024-11-06 09:17:32.569807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:33.784 [2024-11-06 09:17:32.569817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:33.784 [2024-11-06 09:17:32.570100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:33.784 [2024-11-06 09:17:32.570327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:33.784 [2024-11-06 09:17:32.570353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:33.784 [2024-11-06 09:17:32.570572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.784 pt1 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.784 "name": "raid_bdev1", 00:24:33.784 "uuid": "d90b04ac-f473-4e6f-a082-4e209c14dfbc", 00:24:33.784 "strip_size_kb": 0, 00:24:33.784 "state": "online", 00:24:33.784 "raid_level": "raid1", 
00:24:33.784 "superblock": true, 00:24:33.784 "num_base_bdevs": 2, 00:24:33.784 "num_base_bdevs_discovered": 1, 00:24:33.784 "num_base_bdevs_operational": 1, 00:24:33.784 "base_bdevs_list": [ 00:24:33.784 { 00:24:33.784 "name": null, 00:24:33.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.784 "is_configured": false, 00:24:33.784 "data_offset": 256, 00:24:33.784 "data_size": 7936 00:24:33.784 }, 00:24:33.784 { 00:24:33.784 "name": "pt2", 00:24:33.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.784 "is_configured": true, 00:24:33.784 "data_offset": 256, 00:24:33.784 "data_size": 7936 00:24:33.784 } 00:24:33.784 ] 00:24:33.784 }' 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.784 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.044 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:34.044 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.044 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.044 09:17:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:34.044 09:17:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.044 
[2024-11-06 09:17:33.042652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d90b04ac-f473-4e6f-a082-4e209c14dfbc '!=' d90b04ac-f473-4e6f-a082-4e209c14dfbc ']' 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85899 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 85899 ']' 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 85899 00:24:34.044 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85899 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:34.303 killing process with pid 85899 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85899' 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 85899 00:24:34.303 09:17:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 85899 00:24:34.303 [2024-11-06 09:17:33.123327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:34.303 [2024-11-06 09:17:33.123451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:34.303 [2024-11-06 09:17:33.123502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:34.303 [2024-11-06 09:17:33.123521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:34.561 [2024-11-06 09:17:33.346492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.516 ************************************ 00:24:35.516 END TEST raid_superblock_test_4k 00:24:35.516 ************************************ 00:24:35.516 09:17:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:24:35.516 00:24:35.516 real 0m6.138s 00:24:35.516 user 0m9.231s 00:24:35.516 sys 0m1.256s 00:24:35.516 09:17:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:35.516 09:17:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.775 09:17:34 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:24:35.775 09:17:34 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:35.775 09:17:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:35.775 09:17:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:35.775 09:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.775 ************************************ 00:24:35.775 START TEST raid_rebuild_test_sb_4k 00:24:35.775 ************************************ 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:35.775 09:17:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:35.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86222 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86222 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86222 ']' 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:35.775 09:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.775 [2024-11-06 09:17:34.673512] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:24:35.775 [2024-11-06 09:17:34.673781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:24:35.775 Zero copy mechanism will not be used. 
00:24:35.775 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86222 ] 00:24:36.034 [2024-11-06 09:17:34.859426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.034 [2024-11-06 09:17:34.980456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.293 [2024-11-06 09:17:35.195399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.293 [2024-11-06 09:17:35.195616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.553 BaseBdev1_malloc 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.553 [2024-11-06 09:17:35.569467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:36.553 [2024-11-06 09:17:35.569650] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.553 [2024-11-06 09:17:35.569684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:36.553 [2024-11-06 09:17:35.569700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.553 [2024-11-06 09:17:35.572246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.553 [2024-11-06 09:17:35.572420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:36.553 BaseBdev1 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.553 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 BaseBdev2_malloc 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 [2024-11-06 09:17:35.623840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:36.813 [2024-11-06 09:17:35.623898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.813 [2024-11-06 09:17:35.623919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:24:36.813 [2024-11-06 09:17:35.623933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.813 [2024-11-06 09:17:35.626355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.813 [2024-11-06 09:17:35.626398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:36.813 BaseBdev2 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 spare_malloc 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 spare_delay 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 [2024-11-06 09:17:35.693541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:36.813 
[2024-11-06 09:17:35.693700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.813 [2024-11-06 09:17:35.693729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:36.813 [2024-11-06 09:17:35.693743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.813 [2024-11-06 09:17:35.696304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.813 [2024-11-06 09:17:35.696356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:36.813 spare 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 [2024-11-06 09:17:35.705620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:36.813 [2024-11-06 09:17:35.707972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:36.813 [2024-11-06 09:17:35.708310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:36.813 [2024-11-06 09:17:35.708431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:36.813 [2024-11-06 09:17:35.708751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:36.813 [2024-11-06 09:17:35.708921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:36.813 [2024-11-06 09:17:35.708932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:24:36.813 [2024-11-06 09:17:35.709124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.813 "name": "raid_bdev1", 00:24:36.813 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:36.813 "strip_size_kb": 0, 00:24:36.813 "state": "online", 00:24:36.813 "raid_level": "raid1", 00:24:36.813 "superblock": true, 00:24:36.813 "num_base_bdevs": 2, 00:24:36.813 "num_base_bdevs_discovered": 2, 00:24:36.813 "num_base_bdevs_operational": 2, 00:24:36.813 "base_bdevs_list": [ 00:24:36.813 { 00:24:36.813 "name": "BaseBdev1", 00:24:36.813 "uuid": "d9f94b61-06aa-5d85-aba3-e1f23bff33a4", 00:24:36.813 "is_configured": true, 00:24:36.813 "data_offset": 256, 00:24:36.813 "data_size": 7936 00:24:36.813 }, 00:24:36.813 { 00:24:36.813 "name": "BaseBdev2", 00:24:36.813 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:36.813 "is_configured": true, 00:24:36.813 "data_offset": 256, 00:24:36.813 "data_size": 7936 00:24:36.813 } 00:24:36.813 ] 00:24:36.813 }' 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.813 09:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.382 [2024-11-06 09:17:36.157288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:37.382 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:37.642 [2024-11-06 09:17:36.452621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:37.642 /dev/nbd0 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.642 1+0 records in 00:24:37.642 1+0 records out 00:24:37.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504247 s, 8.1 MB/s 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:37.642 09:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:38.578 7936+0 records in 00:24:38.578 7936+0 records out 00:24:38.578 32505856 bytes (33 MB, 31 MiB) copied, 0.774964 s, 41.9 MB/s 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:38.578 [2024-11-06 09:17:37.537624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:38.578 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.579 [2024-11-06 09:17:37.553713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.579 "name": "raid_bdev1", 00:24:38.579 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:38.579 "strip_size_kb": 0, 00:24:38.579 "state": "online", 00:24:38.579 "raid_level": "raid1", 00:24:38.579 "superblock": true, 00:24:38.579 "num_base_bdevs": 2, 00:24:38.579 "num_base_bdevs_discovered": 1, 00:24:38.579 "num_base_bdevs_operational": 1, 00:24:38.579 "base_bdevs_list": [ 00:24:38.579 { 00:24:38.579 "name": null, 00:24:38.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.579 "is_configured": false, 00:24:38.579 "data_offset": 0, 00:24:38.579 "data_size": 7936 00:24:38.579 }, 00:24:38.579 { 00:24:38.579 "name": "BaseBdev2", 00:24:38.579 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:38.579 "is_configured": true, 00:24:38.579 "data_offset": 256, 00:24:38.579 "data_size": 7936 00:24:38.579 } 00:24:38.579 ] 00:24:38.579 }' 00:24:38.579 09:17:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.579 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.188 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:39.188 09:17:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.188 09:17:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.188 [2024-11-06 09:17:38.005218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.188 [2024-11-06 09:17:38.022482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:39.188 09:17:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.188 09:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:39.188 [2024-11-06 09:17:38.024716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.126 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.127 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.127 "name": "raid_bdev1", 00:24:40.127 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:40.127 "strip_size_kb": 0, 00:24:40.127 "state": "online", 00:24:40.127 "raid_level": "raid1", 00:24:40.127 "superblock": true, 00:24:40.127 "num_base_bdevs": 2, 00:24:40.127 "num_base_bdevs_discovered": 2, 00:24:40.127 "num_base_bdevs_operational": 2, 00:24:40.127 "process": { 00:24:40.127 "type": "rebuild", 00:24:40.127 "target": "spare", 00:24:40.127 "progress": { 00:24:40.127 "blocks": 2560, 00:24:40.127 "percent": 32 00:24:40.127 } 00:24:40.127 }, 00:24:40.127 "base_bdevs_list": [ 00:24:40.127 { 00:24:40.127 "name": "spare", 00:24:40.127 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:40.127 "is_configured": true, 00:24:40.127 "data_offset": 256, 00:24:40.127 "data_size": 7936 00:24:40.127 }, 00:24:40.127 { 00:24:40.127 "name": "BaseBdev2", 00:24:40.127 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:40.127 "is_configured": true, 00:24:40.127 "data_offset": 256, 00:24:40.127 "data_size": 7936 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 }' 00:24:40.127 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.127 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.127 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.386 [2024-11-06 09:17:39.171764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.386 [2024-11-06 09:17:39.230622] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.386 [2024-11-06 09:17:39.230702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.386 [2024-11-06 09:17:39.230719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.386 [2024-11-06 09:17:39.230730] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.386 "name": "raid_bdev1", 00:24:40.386 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:40.386 "strip_size_kb": 0, 00:24:40.386 "state": "online", 00:24:40.386 "raid_level": "raid1", 00:24:40.386 "superblock": true, 00:24:40.386 "num_base_bdevs": 2, 00:24:40.386 "num_base_bdevs_discovered": 1, 00:24:40.386 "num_base_bdevs_operational": 1, 00:24:40.386 "base_bdevs_list": [ 00:24:40.386 { 00:24:40.386 "name": null, 00:24:40.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.386 "is_configured": false, 00:24:40.386 "data_offset": 0, 00:24:40.386 "data_size": 7936 00:24:40.386 }, 00:24:40.386 { 00:24:40.386 "name": "BaseBdev2", 00:24:40.386 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:40.386 "is_configured": true, 00:24:40.386 "data_offset": 256, 00:24:40.386 "data_size": 7936 00:24:40.386 } 00:24:40.386 ] 00:24:40.386 }' 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.386 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.955 
09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.955 "name": "raid_bdev1", 00:24:40.955 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:40.955 "strip_size_kb": 0, 00:24:40.955 "state": "online", 00:24:40.955 "raid_level": "raid1", 00:24:40.955 "superblock": true, 00:24:40.955 "num_base_bdevs": 2, 00:24:40.955 "num_base_bdevs_discovered": 1, 00:24:40.955 "num_base_bdevs_operational": 1, 00:24:40.955 "base_bdevs_list": [ 00:24:40.955 { 00:24:40.955 "name": null, 00:24:40.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.955 "is_configured": false, 00:24:40.955 "data_offset": 0, 00:24:40.955 "data_size": 7936 00:24:40.955 }, 00:24:40.955 { 00:24:40.955 "name": "BaseBdev2", 00:24:40.955 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:40.955 "is_configured": true, 00:24:40.955 "data_offset": 256, 00:24:40.955 "data_size": 7936 00:24:40.955 } 00:24:40.955 ] 00:24:40.955 }' 00:24:40.955 09:17:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:40.955 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.956 [2024-11-06 09:17:39.871441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.956 [2024-11-06 09:17:39.887315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.956 09:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:40.956 [2024-11-06 09:17:39.889544] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:41.893 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.893 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.894 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.153 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.153 "name": "raid_bdev1", 00:24:42.153 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:42.153 "strip_size_kb": 0, 00:24:42.153 "state": "online", 00:24:42.153 "raid_level": "raid1", 00:24:42.153 "superblock": true, 00:24:42.153 "num_base_bdevs": 2, 00:24:42.153 "num_base_bdevs_discovered": 2, 00:24:42.153 "num_base_bdevs_operational": 2, 00:24:42.153 "process": { 00:24:42.153 "type": "rebuild", 00:24:42.153 "target": "spare", 00:24:42.153 "progress": { 00:24:42.153 "blocks": 2560, 00:24:42.153 "percent": 32 00:24:42.153 } 00:24:42.153 }, 00:24:42.153 "base_bdevs_list": [ 00:24:42.153 { 00:24:42.153 "name": "spare", 00:24:42.153 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:42.153 "is_configured": true, 00:24:42.153 "data_offset": 256, 00:24:42.153 "data_size": 7936 00:24:42.153 }, 00:24:42.153 { 00:24:42.153 "name": "BaseBdev2", 00:24:42.153 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:42.153 "is_configured": true, 00:24:42.153 "data_offset": 256, 00:24:42.153 "data_size": 7936 00:24:42.153 } 00:24:42.153 ] 00:24:42.153 }' 00:24:42.153 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.153 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.153 09:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:24:42.153 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.153 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:42.153 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:42.153 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:42.153 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=676 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.154 09:17:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.154 "name": "raid_bdev1", 00:24:42.154 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:42.154 "strip_size_kb": 0, 00:24:42.154 "state": "online", 00:24:42.154 "raid_level": "raid1", 00:24:42.154 "superblock": true, 00:24:42.154 "num_base_bdevs": 2, 00:24:42.154 "num_base_bdevs_discovered": 2, 00:24:42.154 "num_base_bdevs_operational": 2, 00:24:42.154 "process": { 00:24:42.154 "type": "rebuild", 00:24:42.154 "target": "spare", 00:24:42.154 "progress": { 00:24:42.154 "blocks": 2816, 00:24:42.154 "percent": 35 00:24:42.154 } 00:24:42.154 }, 00:24:42.154 "base_bdevs_list": [ 00:24:42.154 { 00:24:42.154 "name": "spare", 00:24:42.154 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:42.154 "is_configured": true, 00:24:42.154 "data_offset": 256, 00:24:42.154 "data_size": 7936 00:24:42.154 }, 00:24:42.154 { 00:24:42.154 "name": "BaseBdev2", 00:24:42.154 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:42.154 "is_configured": true, 00:24:42.154 "data_offset": 256, 00:24:42.154 "data_size": 7936 00:24:42.154 } 00:24:42.154 ] 00:24:42.154 }' 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.154 09:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.530 "name": "raid_bdev1", 00:24:43.530 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:43.530 "strip_size_kb": 0, 00:24:43.530 "state": "online", 00:24:43.530 "raid_level": "raid1", 00:24:43.530 "superblock": true, 00:24:43.530 "num_base_bdevs": 2, 00:24:43.530 "num_base_bdevs_discovered": 2, 00:24:43.530 "num_base_bdevs_operational": 2, 00:24:43.530 "process": { 00:24:43.530 "type": "rebuild", 00:24:43.530 "target": "spare", 00:24:43.530 "progress": { 00:24:43.530 "blocks": 5632, 00:24:43.530 "percent": 70 00:24:43.530 } 00:24:43.530 }, 00:24:43.530 "base_bdevs_list": [ 00:24:43.530 { 00:24:43.530 "name": "spare", 00:24:43.530 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 
00:24:43.530 "is_configured": true, 00:24:43.530 "data_offset": 256, 00:24:43.530 "data_size": 7936 00:24:43.530 }, 00:24:43.530 { 00:24:43.530 "name": "BaseBdev2", 00:24:43.530 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:43.530 "is_configured": true, 00:24:43.530 "data_offset": 256, 00:24:43.530 "data_size": 7936 00:24:43.530 } 00:24:43.530 ] 00:24:43.530 }' 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.530 09:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:44.098 [2024-11-06 09:17:43.004390] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:44.098 [2024-11-06 09:17:43.004483] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:44.098 [2024-11-06 09:17:43.004608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.357 "name": "raid_bdev1", 00:24:44.357 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:44.357 "strip_size_kb": 0, 00:24:44.357 "state": "online", 00:24:44.357 "raid_level": "raid1", 00:24:44.357 "superblock": true, 00:24:44.357 "num_base_bdevs": 2, 00:24:44.357 "num_base_bdevs_discovered": 2, 00:24:44.357 "num_base_bdevs_operational": 2, 00:24:44.357 "base_bdevs_list": [ 00:24:44.357 { 00:24:44.357 "name": "spare", 00:24:44.357 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:44.357 "is_configured": true, 00:24:44.357 "data_offset": 256, 00:24:44.357 "data_size": 7936 00:24:44.357 }, 00:24:44.357 { 00:24:44.357 "name": "BaseBdev2", 00:24:44.357 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:44.357 "is_configured": true, 00:24:44.357 "data_offset": 256, 00:24:44.357 "data_size": 7936 00:24:44.357 } 00:24:44.357 ] 00:24:44.357 }' 00:24:44.357 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.616 "name": "raid_bdev1", 00:24:44.616 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:44.616 "strip_size_kb": 0, 00:24:44.616 "state": "online", 00:24:44.616 "raid_level": "raid1", 00:24:44.616 "superblock": true, 00:24:44.616 "num_base_bdevs": 2, 00:24:44.616 "num_base_bdevs_discovered": 2, 00:24:44.616 "num_base_bdevs_operational": 2, 00:24:44.616 "base_bdevs_list": [ 00:24:44.616 { 00:24:44.616 "name": "spare", 00:24:44.616 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:44.616 "is_configured": true, 00:24:44.616 "data_offset": 256, 00:24:44.616 "data_size": 7936 00:24:44.616 }, 00:24:44.616 { 00:24:44.616 "name": 
"BaseBdev2", 00:24:44.616 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:44.616 "is_configured": true, 00:24:44.616 "data_offset": 256, 00:24:44.616 "data_size": 7936 00:24:44.616 } 00:24:44.616 ] 00:24:44.616 }' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.616 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.875 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.875 "name": "raid_bdev1", 00:24:44.875 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:44.875 "strip_size_kb": 0, 00:24:44.875 "state": "online", 00:24:44.875 "raid_level": "raid1", 00:24:44.875 "superblock": true, 00:24:44.875 "num_base_bdevs": 2, 00:24:44.875 "num_base_bdevs_discovered": 2, 00:24:44.875 "num_base_bdevs_operational": 2, 00:24:44.875 "base_bdevs_list": [ 00:24:44.875 { 00:24:44.875 "name": "spare", 00:24:44.875 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:44.875 "is_configured": true, 00:24:44.875 "data_offset": 256, 00:24:44.875 "data_size": 7936 00:24:44.875 }, 00:24:44.875 { 00:24:44.875 "name": "BaseBdev2", 00:24:44.875 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:44.875 "is_configured": true, 00:24:44.875 "data_offset": 256, 00:24:44.875 "data_size": 7936 00:24:44.875 } 00:24:44.875 ] 00:24:44.875 }' 00:24:44.875 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.875 09:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.134 [2024-11-06 09:17:44.094453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:24:45.134 [2024-11-06 09:17:44.094493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.134 [2024-11-06 09:17:44.094581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.134 [2024-11-06 09:17:44.094651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.134 [2024-11-06 09:17:44.094665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.134 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:45.393 /dev/nbd0 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:24:45.393 1+0 records in 00:24:45.393 1+0 records out 00:24:45.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434864 s, 9.4 MB/s 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:45.393 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:45.681 /dev/nbd1 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@875 -- # break 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:45.681 1+0 records in 00:24:45.681 1+0 records out 00:24:45.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406669 s, 10.1 MB/s 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:24:45.681 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.940 09:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.199 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.458 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.458 [2024-11-06 09:17:45.404584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:46.458 [2024-11-06 09:17:45.404650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.458 [2024-11-06 09:17:45.404679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:46.458 [2024-11-06 09:17:45.404692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.458 [2024-11-06 09:17:45.407412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.458 [2024-11-06 09:17:45.407454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:46.458 [2024-11-06 09:17:45.407570] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:46.458 [2024-11-06 09:17:45.407627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.458 [2024-11-06 09:17:45.407800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:46.458 spare 00:24:46.459 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.459 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:46.459 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.459 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.717 [2024-11-06 09:17:45.507774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:46.717 [2024-11-06 09:17:45.507843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:46.717 [2024-11-06 09:17:45.508213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:46.717 [2024-11-06 09:17:45.508458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:46.717 [2024-11-06 09:17:45.508472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:46.717 [2024-11-06 09:17:45.508685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.717 "name": "raid_bdev1", 00:24:46.717 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:46.717 "strip_size_kb": 0, 00:24:46.717 "state": "online", 00:24:46.717 "raid_level": "raid1", 00:24:46.717 "superblock": true, 00:24:46.717 "num_base_bdevs": 2, 00:24:46.717 "num_base_bdevs_discovered": 2, 00:24:46.717 "num_base_bdevs_operational": 2, 00:24:46.717 "base_bdevs_list": [ 00:24:46.717 { 00:24:46.717 "name": "spare", 00:24:46.717 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:46.717 
"is_configured": true, 00:24:46.717 "data_offset": 256, 00:24:46.717 "data_size": 7936 00:24:46.717 }, 00:24:46.717 { 00:24:46.717 "name": "BaseBdev2", 00:24:46.717 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:46.717 "is_configured": true, 00:24:46.717 "data_offset": 256, 00:24:46.717 "data_size": 7936 00:24:46.717 } 00:24:46.717 ] 00:24:46.717 }' 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.717 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.976 09:17:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.976 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:47.236 "name": "raid_bdev1", 00:24:47.236 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:47.236 "strip_size_kb": 0, 00:24:47.236 "state": "online", 00:24:47.236 "raid_level": "raid1", 
00:24:47.236 "superblock": true, 00:24:47.236 "num_base_bdevs": 2, 00:24:47.236 "num_base_bdevs_discovered": 2, 00:24:47.236 "num_base_bdevs_operational": 2, 00:24:47.236 "base_bdevs_list": [ 00:24:47.236 { 00:24:47.236 "name": "spare", 00:24:47.236 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:47.236 "is_configured": true, 00:24:47.236 "data_offset": 256, 00:24:47.236 "data_size": 7936 00:24:47.236 }, 00:24:47.236 { 00:24:47.236 "name": "BaseBdev2", 00:24:47.236 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:47.236 "is_configured": true, 00:24:47.236 "data_offset": 256, 00:24:47.236 "data_size": 7936 00:24:47.236 } 00:24:47.236 ] 00:24:47.236 }' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.236 [2024-11-06 09:17:46.168156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.236 09:17:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.236 "name": "raid_bdev1", 00:24:47.236 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:47.236 "strip_size_kb": 0, 00:24:47.236 "state": "online", 00:24:47.236 "raid_level": "raid1", 00:24:47.236 "superblock": true, 00:24:47.236 "num_base_bdevs": 2, 00:24:47.236 "num_base_bdevs_discovered": 1, 00:24:47.236 "num_base_bdevs_operational": 1, 00:24:47.236 "base_bdevs_list": [ 00:24:47.236 { 00:24:47.236 "name": null, 00:24:47.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.236 "is_configured": false, 00:24:47.236 "data_offset": 0, 00:24:47.236 "data_size": 7936 00:24:47.236 }, 00:24:47.236 { 00:24:47.236 "name": "BaseBdev2", 00:24:47.236 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:47.236 "is_configured": true, 00:24:47.236 "data_offset": 256, 00:24:47.236 "data_size": 7936 00:24:47.236 } 00:24:47.236 ] 00:24:47.236 }' 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.236 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.845 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 [2024-11-06 09:17:46.603578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.845 [2024-11-06 09:17:46.603789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:47.845 [2024-11-06 09:17:46.603813] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:24:47.845 [2024-11-06 09:17:46.603856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.845 [2024-11-06 09:17:46.620988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:47.845 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 09:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:47.845 [2024-11-06 09:17:46.623190] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.783 "name": "raid_bdev1", 00:24:48.783 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:48.783 
"strip_size_kb": 0, 00:24:48.783 "state": "online", 00:24:48.783 "raid_level": "raid1", 00:24:48.783 "superblock": true, 00:24:48.783 "num_base_bdevs": 2, 00:24:48.783 "num_base_bdevs_discovered": 2, 00:24:48.783 "num_base_bdevs_operational": 2, 00:24:48.783 "process": { 00:24:48.783 "type": "rebuild", 00:24:48.783 "target": "spare", 00:24:48.783 "progress": { 00:24:48.783 "blocks": 2560, 00:24:48.783 "percent": 32 00:24:48.783 } 00:24:48.783 }, 00:24:48.783 "base_bdevs_list": [ 00:24:48.783 { 00:24:48.783 "name": "spare", 00:24:48.783 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:48.783 "is_configured": true, 00:24:48.783 "data_offset": 256, 00:24:48.783 "data_size": 7936 00:24:48.783 }, 00:24:48.783 { 00:24:48.783 "name": "BaseBdev2", 00:24:48.783 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:48.783 "is_configured": true, 00:24:48.783 "data_offset": 256, 00:24:48.783 "data_size": 7936 00:24:48.783 } 00:24:48.783 ] 00:24:48.783 }' 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.783 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:48.784 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.784 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:48.784 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:48.784 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.784 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.784 [2024-11-06 09:17:47.750866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:49.043 [2024-11-06 09:17:47.828978] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:24:49.043 [2024-11-06 09:17:47.829064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.043 [2024-11-06 09:17:47.829082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:49.043 [2024-11-06 09:17:47.829095] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.043 "name": "raid_bdev1", 00:24:49.043 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:49.043 "strip_size_kb": 0, 00:24:49.043 "state": "online", 00:24:49.043 "raid_level": "raid1", 00:24:49.043 "superblock": true, 00:24:49.043 "num_base_bdevs": 2, 00:24:49.043 "num_base_bdevs_discovered": 1, 00:24:49.043 "num_base_bdevs_operational": 1, 00:24:49.043 "base_bdevs_list": [ 00:24:49.043 { 00:24:49.043 "name": null, 00:24:49.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.043 "is_configured": false, 00:24:49.043 "data_offset": 0, 00:24:49.043 "data_size": 7936 00:24:49.043 }, 00:24:49.043 { 00:24:49.043 "name": "BaseBdev2", 00:24:49.043 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:49.043 "is_configured": true, 00:24:49.043 "data_offset": 256, 00:24:49.043 "data_size": 7936 00:24:49.043 } 00:24:49.043 ] 00:24:49.043 }' 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.043 09:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.302 09:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:49.302 09:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.302 09:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.302 [2024-11-06 09:17:48.314397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:49.302 [2024-11-06 09:17:48.314474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.302 [2024-11-06 
09:17:48.314501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:49.302 [2024-11-06 09:17:48.314517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.302 [2024-11-06 09:17:48.315046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.302 [2024-11-06 09:17:48.315072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:49.302 [2024-11-06 09:17:48.315180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:49.302 [2024-11-06 09:17:48.315199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:49.302 [2024-11-06 09:17:48.315212] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:49.302 [2024-11-06 09:17:48.315242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:49.302 [2024-11-06 09:17:48.332590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:49.302 spare 00:24:49.302 09:17:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.302 09:17:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:49.302 [2024-11-06 09:17:48.334994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:50.686 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.686 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.686 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:50.686 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:24:50.686 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.687 "name": "raid_bdev1", 00:24:50.687 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:50.687 "strip_size_kb": 0, 00:24:50.687 "state": "online", 00:24:50.687 "raid_level": "raid1", 00:24:50.687 "superblock": true, 00:24:50.687 "num_base_bdevs": 2, 00:24:50.687 "num_base_bdevs_discovered": 2, 00:24:50.687 "num_base_bdevs_operational": 2, 00:24:50.687 "process": { 00:24:50.687 "type": "rebuild", 00:24:50.687 "target": "spare", 00:24:50.687 "progress": { 00:24:50.687 "blocks": 2560, 00:24:50.687 "percent": 32 00:24:50.687 } 00:24:50.687 }, 00:24:50.687 "base_bdevs_list": [ 00:24:50.687 { 00:24:50.687 "name": "spare", 00:24:50.687 "uuid": "add2fa9a-4f39-553d-a519-fe2261848aca", 00:24:50.687 "is_configured": true, 00:24:50.687 "data_offset": 256, 00:24:50.687 "data_size": 7936 00:24:50.687 }, 00:24:50.687 { 00:24:50.687 "name": "BaseBdev2", 00:24:50.687 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:50.687 "is_configured": true, 00:24:50.687 "data_offset": 256, 00:24:50.687 "data_size": 7936 00:24:50.687 } 00:24:50.687 ] 00:24:50.687 }' 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.687 09:17:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.687 [2024-11-06 09:17:49.478460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:50.687 [2024-11-06 09:17:49.541064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:50.687 [2024-11-06 09:17:49.541155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.687 [2024-11-06 09:17:49.541176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:50.687 [2024-11-06 09:17:49.541185] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.687 09:17:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.687 "name": "raid_bdev1", 00:24:50.687 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:50.687 "strip_size_kb": 0, 00:24:50.687 "state": "online", 00:24:50.687 "raid_level": "raid1", 00:24:50.687 "superblock": true, 00:24:50.687 "num_base_bdevs": 2, 00:24:50.687 "num_base_bdevs_discovered": 1, 00:24:50.687 "num_base_bdevs_operational": 1, 00:24:50.687 "base_bdevs_list": [ 00:24:50.687 { 00:24:50.687 "name": null, 00:24:50.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.687 "is_configured": false, 00:24:50.687 "data_offset": 0, 00:24:50.687 "data_size": 7936 00:24:50.687 }, 00:24:50.687 { 00:24:50.687 "name": "BaseBdev2", 00:24:50.687 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:50.687 "is_configured": true, 00:24:50.687 "data_offset": 256, 00:24:50.687 
"data_size": 7936 00:24:50.687 } 00:24:50.687 ] 00:24:50.687 }' 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.687 09:17:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.255 "name": "raid_bdev1", 00:24:51.255 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:51.255 "strip_size_kb": 0, 00:24:51.255 "state": "online", 00:24:51.255 "raid_level": "raid1", 00:24:51.255 "superblock": true, 00:24:51.255 "num_base_bdevs": 2, 00:24:51.255 "num_base_bdevs_discovered": 1, 00:24:51.255 "num_base_bdevs_operational": 1, 00:24:51.255 "base_bdevs_list": [ 00:24:51.255 { 00:24:51.255 "name": null, 00:24:51.255 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:51.255 "is_configured": false, 00:24:51.255 "data_offset": 0, 00:24:51.255 "data_size": 7936 00:24:51.255 }, 00:24:51.255 { 00:24:51.255 "name": "BaseBdev2", 00:24:51.255 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:51.255 "is_configured": true, 00:24:51.255 "data_offset": 256, 00:24:51.255 "data_size": 7936 00:24:51.255 } 00:24:51.255 ] 00:24:51.255 }' 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.255 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.256 [2024-11-06 09:17:50.171022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:51.256 [2024-11-06 09:17:50.171094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.256 [2024-11-06 09:17:50.171123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:24:51.256 [2024-11-06 09:17:50.171145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.256 [2024-11-06 09:17:50.171648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.256 [2024-11-06 09:17:50.171674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:51.256 [2024-11-06 09:17:50.171788] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:51.256 [2024-11-06 09:17:50.171810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:51.256 [2024-11-06 09:17:50.171827] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:51.256 [2024-11-06 09:17:50.171840] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:51.256 BaseBdev1 00:24:51.256 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.256 09:17:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:52.190 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:52.190 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:52.190 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:52.190 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.191 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.450 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.450 "name": "raid_bdev1", 00:24:52.450 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:52.450 "strip_size_kb": 0, 00:24:52.450 "state": "online", 00:24:52.450 "raid_level": "raid1", 00:24:52.450 "superblock": true, 00:24:52.450 "num_base_bdevs": 2, 00:24:52.450 "num_base_bdevs_discovered": 1, 00:24:52.450 "num_base_bdevs_operational": 1, 00:24:52.450 "base_bdevs_list": [ 00:24:52.450 { 00:24:52.450 "name": null, 00:24:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.450 "is_configured": false, 00:24:52.450 "data_offset": 0, 00:24:52.450 "data_size": 7936 00:24:52.450 }, 00:24:52.450 { 00:24:52.450 "name": "BaseBdev2", 00:24:52.450 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:52.450 "is_configured": true, 00:24:52.450 "data_offset": 256, 00:24:52.450 "data_size": 7936 00:24:52.450 } 00:24:52.450 ] 00:24:52.450 }' 00:24:52.450 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.450 09:17:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.709 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.709 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.710 "name": "raid_bdev1", 00:24:52.710 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:52.710 "strip_size_kb": 0, 00:24:52.710 "state": "online", 00:24:52.710 "raid_level": "raid1", 00:24:52.710 "superblock": true, 00:24:52.710 "num_base_bdevs": 2, 00:24:52.710 "num_base_bdevs_discovered": 1, 00:24:52.710 "num_base_bdevs_operational": 1, 00:24:52.710 "base_bdevs_list": [ 00:24:52.710 { 00:24:52.710 "name": null, 00:24:52.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.710 "is_configured": false, 00:24:52.710 "data_offset": 0, 00:24:52.710 "data_size": 7936 00:24:52.710 }, 00:24:52.710 { 00:24:52.710 "name": "BaseBdev2", 00:24:52.710 "uuid": 
"e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:52.710 "is_configured": true, 00:24:52.710 "data_offset": 256, 00:24:52.710 "data_size": 7936 00:24:52.710 } 00:24:52.710 ] 00:24:52.710 }' 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:52.710 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.968 [2024-11-06 09:17:51.782411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
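The `valid_exec_arg`/`NOT` calls in the trace above come from SPDK's autotest_common.sh and assert that the `bdev_raid_add_base_bdev` RPC fails (the superblock seq_number on BaseBdev1 is stale, so the RPC must return -22). A minimal sketch of that negation idiom — a simplified re-creation for illustration, not SPDK's actual helper — looks like this, with `false` standing in for the failing RPC:

```shell
#!/bin/sh
# Simplified sketch of the NOT idiom exercised in the trace (assumption:
# this is a re-creation for illustration, not SPDK's autotest_common.sh).
NOT() {
    # Run the wrapped command and invert its exit status: the test step
    # passes only when the command fails, e.g. an RPC that must be rejected.
    if "$@"; then
        return 1  # command unexpectedly succeeded -> test step fails
    fi
    return 0      # command failed, which is what the test requires
}

# In the log, bdev_raid_add_base_bdev is expected to fail with -22
# (Invalid argument); `false` stands in for that failing RPC here.
if NOT false; then
    echo "expected failure observed"
fi
```

The real helper also records `es` (the exit status) so later steps like `(( es > 128 ))` can distinguish a plain error return from a crash/signal.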
00:24:52.968 [2024-11-06 09:17:51.782594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:52.968 [2024-11-06 09:17:51.782618] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:52.968 request: 00:24:52.968 { 00:24:52.968 "base_bdev": "BaseBdev1", 00:24:52.968 "raid_bdev": "raid_bdev1", 00:24:52.968 "method": "bdev_raid_add_base_bdev", 00:24:52.968 "req_id": 1 00:24:52.968 } 00:24:52.968 Got JSON-RPC error response 00:24:52.968 response: 00:24:52.968 { 00:24:52.968 "code": -22, 00:24:52.968 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:52.968 } 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:52.968 09:17:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.906 "name": "raid_bdev1", 00:24:53.906 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:53.906 "strip_size_kb": 0, 00:24:53.906 "state": "online", 00:24:53.906 "raid_level": "raid1", 00:24:53.906 "superblock": true, 00:24:53.906 "num_base_bdevs": 2, 00:24:53.906 "num_base_bdevs_discovered": 1, 00:24:53.906 "num_base_bdevs_operational": 1, 00:24:53.906 "base_bdevs_list": [ 00:24:53.906 { 00:24:53.906 "name": null, 00:24:53.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.906 "is_configured": false, 00:24:53.906 "data_offset": 0, 00:24:53.906 "data_size": 7936 00:24:53.906 }, 00:24:53.906 { 00:24:53.906 "name": "BaseBdev2", 00:24:53.906 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:53.906 "is_configured": true, 00:24:53.906 "data_offset": 256, 00:24:53.906 "data_size": 7936 00:24:53.906 } 
00:24:53.906 ] 00:24:53.906 }' 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.906 09:17:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.165 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.424 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.424 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.424 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.425 "name": "raid_bdev1", 00:24:54.425 "uuid": "de73b82d-9fd9-49a5-88cd-f507b2ce1c30", 00:24:54.425 "strip_size_kb": 0, 00:24:54.425 "state": "online", 00:24:54.425 "raid_level": "raid1", 00:24:54.425 "superblock": true, 00:24:54.425 "num_base_bdevs": 2, 00:24:54.425 "num_base_bdevs_discovered": 1, 00:24:54.425 "num_base_bdevs_operational": 1, 00:24:54.425 "base_bdevs_list": [ 00:24:54.425 { 00:24:54.425 "name": null, 00:24:54.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.425 "is_configured": false, 
00:24:54.425 "data_offset": 0, 00:24:54.425 "data_size": 7936 00:24:54.425 }, 00:24:54.425 { 00:24:54.425 "name": "BaseBdev2", 00:24:54.425 "uuid": "e78c6fd2-f7c2-57e9-be64-9743db31fba4", 00:24:54.425 "is_configured": true, 00:24:54.425 "data_offset": 256, 00:24:54.425 "data_size": 7936 00:24:54.425 } 00:24:54.425 ] 00:24:54.425 }' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86222 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86222 ']' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86222 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86222 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:54.425 killing process with pid 86222 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86222' 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86222 00:24:54.425 Received 
shutdown signal, test time was about 60.000000 seconds 00:24:54.425 00:24:54.425 Latency(us) 00:24:54.425 [2024-11-06T09:17:53.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.425 [2024-11-06T09:17:53.465Z] =================================================================================================================== 00:24:54.425 [2024-11-06T09:17:53.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.425 09:17:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86222 00:24:54.425 [2024-11-06 09:17:53.373143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:54.425 [2024-11-06 09:17:53.373286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.425 [2024-11-06 09:17:53.373360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:54.425 [2024-11-06 09:17:53.373377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:54.684 [2024-11-06 09:17:53.702103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:56.101 09:17:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:24:56.101 00:24:56.101 real 0m20.334s 00:24:56.101 user 0m26.292s 00:24:56.101 sys 0m3.113s 00:24:56.101 09:17:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:56.101 09:17:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.101 ************************************ 00:24:56.101 END TEST raid_rebuild_test_sb_4k 00:24:56.101 ************************************ 00:24:56.101 09:17:54 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:24:56.101 09:17:54 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:24:56.101 
09:17:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:56.101 09:17:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:56.101 09:17:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:56.101 ************************************ 00:24:56.101 START TEST raid_state_function_test_sb_md_separate 00:24:56.101 ************************************ 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:56.101 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86921 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:56.102 Process raid pid: 86921 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86921' 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86921 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 86921 ']' 00:24:56.102 09:17:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.102 09:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.102 [2024-11-06 09:17:55.084681] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:24:56.102 [2024-11-06 09:17:55.084831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.360 [2024-11-06 09:17:55.270635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.619 [2024-11-06 09:17:55.407372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.619 [2024-11-06 09:17:55.643608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:56.619 [2024-11-06 09:17:55.643665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.188 [2024-11-06 09:17:55.972182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:57.188 [2024-11-06 09:17:55.972242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:57.188 [2024-11-06 09:17:55.972255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.188 [2024-11-06 09:17:55.972269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.188 09:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.188 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.188 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.188 "name": "Existed_Raid", 00:24:57.188 "uuid": "1b24af6d-2894-41a1-8fad-9cd2c6b8eb59", 00:24:57.188 "strip_size_kb": 0, 00:24:57.188 "state": "configuring", 00:24:57.188 "raid_level": "raid1", 00:24:57.188 "superblock": true, 00:24:57.188 "num_base_bdevs": 2, 00:24:57.188 "num_base_bdevs_discovered": 0, 00:24:57.188 "num_base_bdevs_operational": 2, 00:24:57.188 "base_bdevs_list": [ 00:24:57.188 { 00:24:57.188 "name": "BaseBdev1", 00:24:57.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.188 "is_configured": false, 00:24:57.188 "data_offset": 0, 00:24:57.188 "data_size": 0 00:24:57.188 }, 00:24:57.188 { 00:24:57.188 "name": "BaseBdev2", 00:24:57.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.188 "is_configured": false, 00:24:57.188 "data_offset": 0, 00:24:57.188 "data_size": 0 00:24:57.188 } 00:24:57.188 ] 00:24:57.188 }' 00:24:57.188 09:17:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.188 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 [2024-11-06 09:17:56.407538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.447 [2024-11-06 09:17:56.407590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 [2024-11-06 09:17:56.419547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:57.447 [2024-11-06 09:17:56.419601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:57.447 [2024-11-06 09:17:56.419612] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.447 [2024-11-06 09:17:56.419629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.447 09:17:56 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 [2024-11-06 09:17:56.472682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.447 BaseBdev1 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.447 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.707 [ 00:24:57.707 { 00:24:57.707 "name": "BaseBdev1", 00:24:57.707 "aliases": [ 00:24:57.707 "db363ae5-4dfd-4dd0-9221-2cd201580fba" 00:24:57.707 ], 00:24:57.707 "product_name": "Malloc disk", 00:24:57.707 "block_size": 4096, 00:24:57.707 "num_blocks": 8192, 00:24:57.707 "uuid": "db363ae5-4dfd-4dd0-9221-2cd201580fba", 00:24:57.707 "md_size": 32, 00:24:57.707 "md_interleave": false, 00:24:57.707 "dif_type": 0, 00:24:57.707 "assigned_rate_limits": { 00:24:57.707 "rw_ios_per_sec": 0, 00:24:57.707 "rw_mbytes_per_sec": 0, 00:24:57.707 "r_mbytes_per_sec": 0, 00:24:57.707 "w_mbytes_per_sec": 0 00:24:57.707 }, 00:24:57.707 "claimed": true, 00:24:57.707 "claim_type": "exclusive_write", 00:24:57.707 "zoned": false, 00:24:57.707 "supported_io_types": { 00:24:57.707 "read": true, 00:24:57.707 "write": true, 00:24:57.707 "unmap": true, 00:24:57.707 "flush": true, 00:24:57.707 "reset": true, 00:24:57.707 "nvme_admin": false, 00:24:57.707 "nvme_io": false, 00:24:57.707 "nvme_io_md": false, 00:24:57.707 "write_zeroes": true, 00:24:57.707 "zcopy": true, 00:24:57.707 "get_zone_info": false, 00:24:57.707 "zone_management": false, 00:24:57.707 "zone_append": false, 00:24:57.707 "compare": false, 00:24:57.707 "compare_and_write": false, 00:24:57.707 "abort": true, 00:24:57.707 "seek_hole": false, 00:24:57.707 "seek_data": false, 00:24:57.707 "copy": true, 00:24:57.707 "nvme_iov_md": false 00:24:57.707 }, 00:24:57.707 "memory_domains": [ 00:24:57.707 { 00:24:57.707 "dma_device_id": "system", 00:24:57.707 "dma_device_type": 1 00:24:57.707 }, 
00:24:57.707 { 00:24:57.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.707 "dma_device_type": 2 00:24:57.707 } 00:24:57.707 ], 00:24:57.707 "driver_specific": {} 00:24:57.707 } 00:24:57.707 ] 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.707 "name": "Existed_Raid", 00:24:57.707 "uuid": "53d62926-3e42-4689-bbea-8cad4e1eb042", 00:24:57.707 "strip_size_kb": 0, 00:24:57.707 "state": "configuring", 00:24:57.707 "raid_level": "raid1", 00:24:57.707 "superblock": true, 00:24:57.707 "num_base_bdevs": 2, 00:24:57.707 "num_base_bdevs_discovered": 1, 00:24:57.707 "num_base_bdevs_operational": 2, 00:24:57.707 "base_bdevs_list": [ 00:24:57.707 { 00:24:57.707 "name": "BaseBdev1", 00:24:57.707 "uuid": "db363ae5-4dfd-4dd0-9221-2cd201580fba", 00:24:57.707 "is_configured": true, 00:24:57.707 "data_offset": 256, 00:24:57.707 "data_size": 7936 00:24:57.707 }, 00:24:57.707 { 00:24:57.707 "name": "BaseBdev2", 00:24:57.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.707 "is_configured": false, 00:24:57.707 "data_offset": 0, 00:24:57.707 "data_size": 0 00:24:57.707 } 00:24:57.707 ] 00:24:57.707 }' 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.707 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:24:57.966 [2024-11-06 09:17:56.928163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.966 [2024-11-06 09:17:56.928229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.966 [2024-11-06 09:17:56.940228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.966 [2024-11-06 09:17:56.942616] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.966 [2024-11-06 09:17:56.942671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.966 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.966 "name": "Existed_Raid", 00:24:57.966 "uuid": "e36c2bde-c860-4fe4-8896-5f5d190a87d1", 00:24:57.966 "strip_size_kb": 0, 00:24:57.966 "state": "configuring", 00:24:57.966 "raid_level": "raid1", 00:24:57.966 "superblock": true, 00:24:57.966 "num_base_bdevs": 2, 00:24:57.966 "num_base_bdevs_discovered": 1, 00:24:57.966 
"num_base_bdevs_operational": 2, 00:24:57.966 "base_bdevs_list": [ 00:24:57.966 { 00:24:57.966 "name": "BaseBdev1", 00:24:57.966 "uuid": "db363ae5-4dfd-4dd0-9221-2cd201580fba", 00:24:57.966 "is_configured": true, 00:24:57.966 "data_offset": 256, 00:24:57.966 "data_size": 7936 00:24:57.966 }, 00:24:57.966 { 00:24:57.966 "name": "BaseBdev2", 00:24:57.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.967 "is_configured": false, 00:24:57.967 "data_offset": 0, 00:24:57.967 "data_size": 0 00:24:57.967 } 00:24:57.967 ] 00:24:57.967 }' 00:24:57.967 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.967 09:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.534 [2024-11-06 09:17:57.411360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:58.534 [2024-11-06 09:17:57.411635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:58.534 [2024-11-06 09:17:57.411651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:58.534 [2024-11-06 09:17:57.411741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:58.534 [2024-11-06 09:17:57.411881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:58.534 [2024-11-06 09:17:57.411894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:58.534 [2024-11-06 
09:17:57.411998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.534 BaseBdev2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.534 [ 00:24:58.534 { 00:24:58.534 "name": "BaseBdev2", 00:24:58.534 "aliases": [ 00:24:58.534 
"1db9649e-b675-4089-aefb-a650807600bd" 00:24:58.534 ], 00:24:58.534 "product_name": "Malloc disk", 00:24:58.534 "block_size": 4096, 00:24:58.534 "num_blocks": 8192, 00:24:58.534 "uuid": "1db9649e-b675-4089-aefb-a650807600bd", 00:24:58.534 "md_size": 32, 00:24:58.534 "md_interleave": false, 00:24:58.534 "dif_type": 0, 00:24:58.534 "assigned_rate_limits": { 00:24:58.534 "rw_ios_per_sec": 0, 00:24:58.534 "rw_mbytes_per_sec": 0, 00:24:58.534 "r_mbytes_per_sec": 0, 00:24:58.534 "w_mbytes_per_sec": 0 00:24:58.534 }, 00:24:58.534 "claimed": true, 00:24:58.534 "claim_type": "exclusive_write", 00:24:58.534 "zoned": false, 00:24:58.534 "supported_io_types": { 00:24:58.534 "read": true, 00:24:58.534 "write": true, 00:24:58.534 "unmap": true, 00:24:58.534 "flush": true, 00:24:58.534 "reset": true, 00:24:58.534 "nvme_admin": false, 00:24:58.534 "nvme_io": false, 00:24:58.534 "nvme_io_md": false, 00:24:58.534 "write_zeroes": true, 00:24:58.534 "zcopy": true, 00:24:58.534 "get_zone_info": false, 00:24:58.534 "zone_management": false, 00:24:58.534 "zone_append": false, 00:24:58.534 "compare": false, 00:24:58.534 "compare_and_write": false, 00:24:58.534 "abort": true, 00:24:58.534 "seek_hole": false, 00:24:58.534 "seek_data": false, 00:24:58.534 "copy": true, 00:24:58.534 "nvme_iov_md": false 00:24:58.534 }, 00:24:58.534 "memory_domains": [ 00:24:58.534 { 00:24:58.534 "dma_device_id": "system", 00:24:58.534 "dma_device_type": 1 00:24:58.534 }, 00:24:58.534 { 00:24:58.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.534 "dma_device_type": 2 00:24:58.534 } 00:24:58.534 ], 00:24:58.534 "driver_specific": {} 00:24:58.534 } 00:24:58.534 ] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.534 09:17:57 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.534 "name": "Existed_Raid", 00:24:58.534 "uuid": "e36c2bde-c860-4fe4-8896-5f5d190a87d1", 00:24:58.534 "strip_size_kb": 0, 00:24:58.534 "state": "online", 00:24:58.534 "raid_level": "raid1", 00:24:58.534 "superblock": true, 00:24:58.534 "num_base_bdevs": 2, 00:24:58.534 "num_base_bdevs_discovered": 2, 00:24:58.534 "num_base_bdevs_operational": 2, 00:24:58.534 "base_bdevs_list": [ 00:24:58.534 { 00:24:58.534 "name": "BaseBdev1", 00:24:58.534 "uuid": "db363ae5-4dfd-4dd0-9221-2cd201580fba", 00:24:58.534 "is_configured": true, 00:24:58.534 "data_offset": 256, 00:24:58.534 "data_size": 7936 00:24:58.534 }, 00:24:58.534 { 00:24:58.534 "name": "BaseBdev2", 00:24:58.534 "uuid": "1db9649e-b675-4089-aefb-a650807600bd", 00:24:58.534 "is_configured": true, 00:24:58.534 "data_offset": 256, 00:24:58.534 "data_size": 7936 00:24:58.534 } 00:24:58.534 ] 00:24:58.534 }' 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.534 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:59.102 09:17:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.102 [2024-11-06 09:17:57.903325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.102 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:59.102 "name": "Existed_Raid", 00:24:59.102 "aliases": [ 00:24:59.102 "e36c2bde-c860-4fe4-8896-5f5d190a87d1" 00:24:59.102 ], 00:24:59.102 "product_name": "Raid Volume", 00:24:59.102 "block_size": 4096, 00:24:59.102 "num_blocks": 7936, 00:24:59.102 "uuid": "e36c2bde-c860-4fe4-8896-5f5d190a87d1", 00:24:59.102 "md_size": 32, 00:24:59.102 "md_interleave": false, 00:24:59.102 "dif_type": 0, 00:24:59.102 "assigned_rate_limits": { 00:24:59.102 "rw_ios_per_sec": 0, 00:24:59.102 "rw_mbytes_per_sec": 0, 00:24:59.102 "r_mbytes_per_sec": 0, 00:24:59.102 "w_mbytes_per_sec": 0 00:24:59.102 }, 00:24:59.102 "claimed": false, 00:24:59.102 "zoned": false, 00:24:59.102 "supported_io_types": { 00:24:59.102 "read": true, 00:24:59.102 "write": true, 00:24:59.102 "unmap": false, 00:24:59.102 "flush": false, 00:24:59.102 "reset": true, 00:24:59.102 "nvme_admin": false, 00:24:59.102 "nvme_io": false, 00:24:59.102 "nvme_io_md": false, 00:24:59.102 "write_zeroes": true, 00:24:59.102 "zcopy": false, 00:24:59.102 "get_zone_info": 
false, 00:24:59.102 "zone_management": false, 00:24:59.102 "zone_append": false, 00:24:59.102 "compare": false, 00:24:59.102 "compare_and_write": false, 00:24:59.102 "abort": false, 00:24:59.102 "seek_hole": false, 00:24:59.102 "seek_data": false, 00:24:59.102 "copy": false, 00:24:59.102 "nvme_iov_md": false 00:24:59.102 }, 00:24:59.102 "memory_domains": [ 00:24:59.102 { 00:24:59.102 "dma_device_id": "system", 00:24:59.102 "dma_device_type": 1 00:24:59.102 }, 00:24:59.102 { 00:24:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.102 "dma_device_type": 2 00:24:59.102 }, 00:24:59.102 { 00:24:59.102 "dma_device_id": "system", 00:24:59.102 "dma_device_type": 1 00:24:59.102 }, 00:24:59.102 { 00:24:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.102 "dma_device_type": 2 00:24:59.103 } 00:24:59.103 ], 00:24:59.103 "driver_specific": { 00:24:59.103 "raid": { 00:24:59.103 "uuid": "e36c2bde-c860-4fe4-8896-5f5d190a87d1", 00:24:59.103 "strip_size_kb": 0, 00:24:59.103 "state": "online", 00:24:59.103 "raid_level": "raid1", 00:24:59.103 "superblock": true, 00:24:59.103 "num_base_bdevs": 2, 00:24:59.103 "num_base_bdevs_discovered": 2, 00:24:59.103 "num_base_bdevs_operational": 2, 00:24:59.103 "base_bdevs_list": [ 00:24:59.103 { 00:24:59.103 "name": "BaseBdev1", 00:24:59.103 "uuid": "db363ae5-4dfd-4dd0-9221-2cd201580fba", 00:24:59.103 "is_configured": true, 00:24:59.103 "data_offset": 256, 00:24:59.103 "data_size": 7936 00:24:59.103 }, 00:24:59.103 { 00:24:59.103 "name": "BaseBdev2", 00:24:59.103 "uuid": "1db9649e-b675-4089-aefb-a650807600bd", 00:24:59.103 "is_configured": true, 00:24:59.103 "data_offset": 256, 00:24:59.103 "data_size": 7936 00:24:59.103 } 00:24:59.103 ] 00:24:59.103 } 00:24:59.103 } 00:24:59.103 }' 00:24:59.103 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:59.103 09:17:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:59.103 BaseBdev2' 00:24:59.103 09:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.103 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.103 [2024-11-06 09:17:58.126737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.362 "name": "Existed_Raid", 00:24:59.362 "uuid": 
"e36c2bde-c860-4fe4-8896-5f5d190a87d1", 00:24:59.362 "strip_size_kb": 0, 00:24:59.362 "state": "online", 00:24:59.362 "raid_level": "raid1", 00:24:59.362 "superblock": true, 00:24:59.362 "num_base_bdevs": 2, 00:24:59.362 "num_base_bdevs_discovered": 1, 00:24:59.362 "num_base_bdevs_operational": 1, 00:24:59.362 "base_bdevs_list": [ 00:24:59.362 { 00:24:59.362 "name": null, 00:24:59.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.362 "is_configured": false, 00:24:59.362 "data_offset": 0, 00:24:59.362 "data_size": 7936 00:24:59.362 }, 00:24:59.362 { 00:24:59.362 "name": "BaseBdev2", 00:24:59.362 "uuid": "1db9649e-b675-4089-aefb-a650807600bd", 00:24:59.362 "is_configured": true, 00:24:59.362 "data_offset": 256, 00:24:59.362 "data_size": 7936 00:24:59.362 } 00:24:59.362 ] 00:24:59.362 }' 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.362 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.622 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:59.622 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.881 [2024-11-06 09:17:58.715464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:59.881 [2024-11-06 09:17:58.715591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:59.881 [2024-11-06 09:17:58.828649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.881 [2024-11-06 09:17:58.828708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.881 [2024-11-06 09:17:58.828724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.881 09:17:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86921 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 86921 ']' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 86921 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:59.881 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86921 00:25:00.141 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:00.141 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:00.141 killing process with pid 86921 00:25:00.141 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86921' 00:25:00.141 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 86921 00:25:00.141 [2024-11-06 09:17:58.928582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:25:00.141 09:17:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 86921 00:25:00.141 [2024-11-06 09:17:58.947621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.076 09:18:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:25:01.076 00:25:01.076 real 0m5.120s 00:25:01.076 user 0m7.288s 00:25:01.076 sys 0m1.027s 00:25:01.076 09:18:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:01.076 09:18:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.076 ************************************ 00:25:01.076 END TEST raid_state_function_test_sb_md_separate 00:25:01.076 ************************************ 00:25:01.335 09:18:00 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:25:01.335 09:18:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:01.335 09:18:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:01.335 09:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.335 ************************************ 00:25:01.335 START TEST raid_superblock_test_md_separate 00:25:01.335 ************************************ 00:25:01.335 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:25:01.335 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:01.335 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87172
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87172
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87172 ']'
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:01.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:01.336 09:18:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:01.336 [2024-11-06 09:18:00.263727] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization...
00:25:01.336 [2024-11-06 09:18:00.263859] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87172 ]
00:25:01.595 [2024-11-06 09:18:00.443553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.595 [2024-11-06 09:18:00.567638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:01.855 [2024-11-06 09:18:00.786332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:01.855 [2024-11-06 09:18:00.786404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.114 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.373 malloc1
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.373 [2024-11-06 09:18:01.202810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:02.373 [2024-11-06 09:18:01.202871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.373 [2024-11-06 09:18:01.202895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:25:02.373 [2024-11-06 09:18:01.202908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.373 [2024-11-06 09:18:01.205171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.373 [2024-11-06 09:18:01.205209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:02.373 pt1
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:25:02.373 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.374 malloc2
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.374 [2024-11-06 09:18:01.260883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:02.374 [2024-11-06 09:18:01.260940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.374 [2024-11-06 09:18:01.260965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:25:02.374 [2024-11-06 09:18:01.260977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.374 [2024-11-06 09:18:01.263127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.374 [2024-11-06 09:18:01.263164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:02.374 pt2
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.374 [2024-11-06 09:18:01.272903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:02.374 [2024-11-06 09:18:01.275000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:02.374 [2024-11-06 09:18:01.275182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:25:02.374 [2024-11-06 09:18:01.275199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:25:02.374 [2024-11-06 09:18:01.275304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:25:02.374 [2024-11-06 09:18:01.275436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:25:02.374 [2024-11-06 09:18:01.275451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:25:02.374 [2024-11-06 09:18:01.275577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:02.374 "name": "raid_bdev1",
00:25:02.374 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524",
00:25:02.374 "strip_size_kb": 0,
00:25:02.374 "state": "online",
00:25:02.374 "raid_level": "raid1",
00:25:02.374 "superblock": true,
00:25:02.374 "num_base_bdevs": 2,
00:25:02.374 "num_base_bdevs_discovered": 2,
00:25:02.374 "num_base_bdevs_operational": 2,
00:25:02.374 "base_bdevs_list": [
00:25:02.374 {
00:25:02.374 "name": "pt1",
00:25:02.374 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:02.374 "is_configured": true,
00:25:02.374 "data_offset": 256,
00:25:02.374 "data_size": 7936
00:25:02.374 },
00:25:02.374 {
00:25:02.374 "name": "pt2",
00:25:02.374 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:02.374 "is_configured": true,
00:25:02.374 "data_offset": 256,
00:25:02.374 "data_size": 7936
00:25:02.374 }
00:25:02.374 ]
00:25:02.374 }'
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:02.374 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:02.943 [2024-11-06 09:18:01.712631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:02.943 "name": "raid_bdev1",
00:25:02.943 "aliases": [
00:25:02.943 "eb623660-10f7-46bc-82bc-8a7fa5a6a524"
00:25:02.943 ],
00:25:02.943 "product_name": "Raid Volume",
00:25:02.943 "block_size": 4096,
00:25:02.943 "num_blocks": 7936,
00:25:02.943 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524",
00:25:02.943 "md_size": 32,
00:25:02.943 "md_interleave": false,
00:25:02.943 "dif_type": 0,
00:25:02.943 "assigned_rate_limits": {
00:25:02.943 "rw_ios_per_sec": 0,
00:25:02.943 "rw_mbytes_per_sec": 0,
00:25:02.943 "r_mbytes_per_sec": 0,
00:25:02.943 "w_mbytes_per_sec": 0
00:25:02.943 },
00:25:02.943 "claimed": false,
00:25:02.943 "zoned": false,
00:25:02.943 "supported_io_types": {
00:25:02.943 "read": true,
00:25:02.943 "write": true,
00:25:02.943 "unmap": false,
00:25:02.943 "flush": false,
00:25:02.943 "reset": true,
00:25:02.943 "nvme_admin": false,
00:25:02.943 "nvme_io": false,
00:25:02.943 "nvme_io_md": false,
00:25:02.943 "write_zeroes": true,
00:25:02.943 "zcopy": false,
00:25:02.943 "get_zone_info": false,
00:25:02.943 "zone_management": false,
00:25:02.943 "zone_append": false,
00:25:02.943 "compare": false,
00:25:02.943 "compare_and_write": false,
00:25:02.943 "abort": false,
00:25:02.943 "seek_hole": false,
00:25:02.943 "seek_data": false,
00:25:02.943 "copy": false,
00:25:02.943 "nvme_iov_md": false
00:25:02.943 },
00:25:02.943 "memory_domains": [
00:25:02.943 {
00:25:02.943 "dma_device_id": "system",
00:25:02.943 "dma_device_type": 1
00:25:02.943 },
00:25:02.943 {
00:25:02.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:02.943 "dma_device_type": 2
00:25:02.943 },
00:25:02.943 {
00:25:02.943 "dma_device_id": "system",
00:25:02.943 "dma_device_type": 1
00:25:02.943 },
00:25:02.943 {
00:25:02.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:02.943 "dma_device_type": 2
00:25:02.943 }
00:25:02.943 ],
00:25:02.943 "driver_specific": {
00:25:02.943 "raid": {
00:25:02.943 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524",
00:25:02.943 "strip_size_kb": 0,
00:25:02.943 "state": "online",
00:25:02.943 "raid_level": "raid1",
00:25:02.943 "superblock": true,
00:25:02.943 "num_base_bdevs": 2,
00:25:02.943 "num_base_bdevs_discovered": 2,
00:25:02.943 "num_base_bdevs_operational": 2,
00:25:02.943 "base_bdevs_list": [
00:25:02.943 {
00:25:02.943 "name": "pt1",
00:25:02.943 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:02.943 "is_configured": true,
00:25:02.943 "data_offset": 256,
00:25:02.943 "data_size": 7936
00:25:02.943 },
00:25:02.943 {
00:25:02.943 "name": "pt2",
00:25:02.943 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:02.943 "is_configured": true,
00:25:02.943 "data_offset": 256,
00:25:02.943 "data_size": 7936
00:25:02.943 }
00:25:02.943 ]
00:25:02.943 }
00:25:02.943 }
00:25:02.943 }'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:25:02.943 pt2'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.943 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:02.944 [2024-11-06 09:18:01.940438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb623660-10f7-46bc-82bc-8a7fa5a6a524
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z eb623660-10f7-46bc-82bc-8a7fa5a6a524 ']'
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:02.944 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 [2024-11-06 09:18:01.983934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:03.204 [2024-11-06 09:18:01.983970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:03.204 [2024-11-06 09:18:01.984075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:03.204 [2024-11-06 09:18:01.984149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:03.204 [2024-11-06 09:18:01.984164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:25:03.204 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:03.204 09:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:25:03.204 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 [2024-11-06 09:18:02.111780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:25:03.204 [2024-11-06 09:18:02.114027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:25:03.204 [2024-11-06 09:18:02.114138] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:25:03.204 [2024-11-06 09:18:02.114212] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:25:03.204 [2024-11-06 09:18:02.114231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:03.204 [2024-11-06 09:18:02.114245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:25:03.204 request:
00:25:03.204 {
00:25:03.204 "name": "raid_bdev1",
00:25:03.204 "raid_level": "raid1",
00:25:03.204 "base_bdevs": [
00:25:03.204 "malloc1",
00:25:03.204 "malloc2"
00:25:03.204 ],
00:25:03.204 "superblock": false,
00:25:03.204 "method": "bdev_raid_create",
00:25:03.204 "req_id": 1
00:25:03.204 }
00:25:03.204 Got JSON-RPC error response
00:25:03.204 response:
00:25:03.204 {
00:25:03.204 "code": -17,
00:25:03.204 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:25:03.204 }
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:25:03.204 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.205 [2024-11-06 09:18:02.175658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:03.205 [2024-11-06 09:18:02.175731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:03.205 [2024-11-06 09:18:02.175753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:25:03.205 [2024-11-06 09:18:02.175768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:03.205 [2024-11-06 09:18:02.177991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:03.205 [2024-11-06 09:18:02.178033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:03.205 [2024-11-06 09:18:02.178092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:25:03.205 [2024-11-06 09:18:02.178155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:03.205 pt1
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:03.205 "name": "raid_bdev1",
00:25:03.205 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524",
00:25:03.205 "strip_size_kb": 0,
00:25:03.205 "state": "configuring",
00:25:03.205 "raid_level": "raid1",
00:25:03.205 "superblock": true,
00:25:03.205 "num_base_bdevs": 2,
00:25:03.205 "num_base_bdevs_discovered": 1,
00:25:03.205 "num_base_bdevs_operational": 2,
00:25:03.205 "base_bdevs_list": [
00:25:03.205 {
00:25:03.205 "name": "pt1",
00:25:03.205 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:03.205 "is_configured": true,
00:25:03.205 "data_offset": 256,
00:25:03.205 "data_size": 7936
00:25:03.205 },
00:25:03.205 {
00:25:03.205 "name": null,
00:25:03.205 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:03.205 "is_configured": false,
00:25:03.205 "data_offset": 256,
00:25:03.205 "data_size": 7936
00:25:03.205 }
00:25:03.205 ]
00:25:03.205 }'
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:03.205 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.775 [2024-11-06 09:18:02.607033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:03.775 [2024-11-06 09:18:02.607116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:03.775 [2024-11-06 09:18:02.607141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:25:03.775 [2024-11-06 09:18:02.607155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:03.775 [2024-11-06 09:18:02.607410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:03.775 [2024-11-06 09:18:02.607430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:03.775 [2024-11-06 09:18:02.607486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:25:03.775 [2024-11-06 09:18:02.607510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:03.775 [2024-11-06 09:18:02.607630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:25:03.775 [2024-11-06 09:18:02.607643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:25:03.775 [2024-11-06 09:18:02.607708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:25:03.775 [2024-11-06 09:18:02.607809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:25:03.775 [2024-11-06 09:18:02.607819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:25:03.775 [2024-11-06 09:18:02.607923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:03.775 pt2
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:03.775 "name": "raid_bdev1",
00:25:03.775 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524",
00:25:03.775 "strip_size_kb": 0,
00:25:03.775 "state": "online",
00:25:03.775 "raid_level": "raid1",
00:25:03.775 "superblock": true,
00:25:03.775 "num_base_bdevs": 2,
00:25:03.775 "num_base_bdevs_discovered": 2,
00:25:03.775 "num_base_bdevs_operational": 2,
00:25:03.775 "base_bdevs_list": [
00:25:03.775 {
00:25:03.775 "name": "pt1",
00:25:03.775 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:03.775 "is_configured": true,
00:25:03.775 "data_offset": 256,
00:25:03.775 "data_size": 7936
00:25:03.775 },
00:25:03.775 {
00:25:03.775 "name": "pt2",
00:25:03.775 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:03.775 "is_configured": true,
00:25:03.775 "data_offset": 256,
00:25:03.775 "data_size": 7936
00:25:03.775 }
00:25:03.775 ]
00:25:03.775 }'
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:03.775 09:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:04.035 [2024-11-06 09:18:03.034745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0
]] 00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.035 "name": "raid_bdev1", 00:25:04.035 "aliases": [ 00:25:04.035 "eb623660-10f7-46bc-82bc-8a7fa5a6a524" 00:25:04.035 ], 00:25:04.035 "product_name": "Raid Volume", 00:25:04.035 "block_size": 4096, 00:25:04.035 "num_blocks": 7936, 00:25:04.035 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524", 00:25:04.035 "md_size": 32, 00:25:04.035 "md_interleave": false, 00:25:04.035 "dif_type": 0, 00:25:04.035 "assigned_rate_limits": { 00:25:04.035 "rw_ios_per_sec": 0, 00:25:04.035 "rw_mbytes_per_sec": 0, 00:25:04.035 "r_mbytes_per_sec": 0, 00:25:04.035 "w_mbytes_per_sec": 0 00:25:04.035 }, 00:25:04.035 "claimed": false, 00:25:04.035 "zoned": false, 00:25:04.035 "supported_io_types": { 00:25:04.035 "read": true, 00:25:04.035 "write": true, 00:25:04.035 "unmap": false, 00:25:04.035 "flush": false, 00:25:04.035 "reset": true, 00:25:04.035 "nvme_admin": false, 00:25:04.035 "nvme_io": false, 00:25:04.035 "nvme_io_md": false, 00:25:04.035 "write_zeroes": true, 00:25:04.035 "zcopy": false, 00:25:04.035 "get_zone_info": false, 00:25:04.035 "zone_management": false, 00:25:04.035 "zone_append": false, 00:25:04.035 "compare": false, 00:25:04.035 "compare_and_write": false, 00:25:04.035 "abort": false, 00:25:04.035 "seek_hole": false, 00:25:04.035 "seek_data": false, 00:25:04.035 "copy": false, 00:25:04.035 "nvme_iov_md": false 00:25:04.035 }, 00:25:04.035 "memory_domains": [ 00:25:04.035 { 00:25:04.035 "dma_device_id": "system", 00:25:04.035 "dma_device_type": 1 00:25:04.035 }, 00:25:04.035 { 00:25:04.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.035 "dma_device_type": 2 00:25:04.035 }, 00:25:04.035 { 00:25:04.035 "dma_device_id": "system", 00:25:04.035 "dma_device_type": 1 00:25:04.035 }, 00:25:04.035 { 00:25:04.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.035 "dma_device_type": 2 00:25:04.035 } 00:25:04.035 ], 00:25:04.035 "driver_specific": { 
00:25:04.035 "raid": { 00:25:04.035 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524", 00:25:04.035 "strip_size_kb": 0, 00:25:04.035 "state": "online", 00:25:04.035 "raid_level": "raid1", 00:25:04.035 "superblock": true, 00:25:04.035 "num_base_bdevs": 2, 00:25:04.035 "num_base_bdevs_discovered": 2, 00:25:04.035 "num_base_bdevs_operational": 2, 00:25:04.035 "base_bdevs_list": [ 00:25:04.035 { 00:25:04.035 "name": "pt1", 00:25:04.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:04.035 "is_configured": true, 00:25:04.035 "data_offset": 256, 00:25:04.035 "data_size": 7936 00:25:04.035 }, 00:25:04.035 { 00:25:04.035 "name": "pt2", 00:25:04.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.035 "is_configured": true, 00:25:04.035 "data_offset": 256, 00:25:04.035 "data_size": 7936 00:25:04.035 } 00:25:04.035 ] 00:25:04.035 } 00:25:04.035 } 00:25:04.035 }' 00:25:04.035 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:04.296 pt2' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 09:18:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.296 09:18:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 [2024-11-06 09:18:03.278666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' eb623660-10f7-46bc-82bc-8a7fa5a6a524 '!=' eb623660-10f7-46bc-82bc-8a7fa5a6a524 ']' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 [2024-11-06 09:18:03.318411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.296 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.586 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.586 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.586 "name": "raid_bdev1", 00:25:04.586 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524", 00:25:04.586 "strip_size_kb": 0, 00:25:04.586 "state": "online", 00:25:04.586 "raid_level": "raid1", 00:25:04.586 "superblock": true, 00:25:04.586 "num_base_bdevs": 2, 00:25:04.586 "num_base_bdevs_discovered": 1, 00:25:04.586 "num_base_bdevs_operational": 1, 00:25:04.586 "base_bdevs_list": [ 00:25:04.586 { 00:25:04.586 "name": null, 00:25:04.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.586 "is_configured": false, 00:25:04.586 "data_offset": 0, 00:25:04.586 "data_size": 7936 00:25:04.586 }, 00:25:04.586 { 00:25:04.586 
"name": "pt2", 00:25:04.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.586 "is_configured": true, 00:25:04.586 "data_offset": 256, 00:25:04.586 "data_size": 7936 00:25:04.586 } 00:25:04.586 ] 00:25:04.586 }' 00:25:04.586 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.586 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.846 [2024-11-06 09:18:03.722375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:04.846 [2024-11-06 09:18:03.722411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.846 [2024-11-06 09:18:03.722495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.846 [2024-11-06 09:18:03.722551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.846 [2024-11-06 09:18:03.722566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.846 09:18:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:04.846 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.847 09:18:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.847 [2024-11-06 09:18:03.790400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:04.847 [2024-11-06 09:18:03.790485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.847 [2024-11-06 09:18:03.790508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:04.847 [2024-11-06 09:18:03.790524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.847 [2024-11-06 09:18:03.792852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.847 [2024-11-06 09:18:03.792894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:04.847 [2024-11-06 09:18:03.792956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:04.847 [2024-11-06 09:18:03.793006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:04.847 [2024-11-06 09:18:03.793102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:04.847 [2024-11-06 09:18:03.793117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:04.847 [2024-11-06 09:18:03.793190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:04.847 [2024-11-06 09:18:03.793316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:04.847 [2024-11-06 09:18:03.793326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:04.847 [2024-11-06 09:18:03.793424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.847 pt2 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.847 09:18:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.847 "name": "raid_bdev1", 00:25:04.847 "uuid": 
"eb623660-10f7-46bc-82bc-8a7fa5a6a524", 00:25:04.847 "strip_size_kb": 0, 00:25:04.847 "state": "online", 00:25:04.847 "raid_level": "raid1", 00:25:04.847 "superblock": true, 00:25:04.847 "num_base_bdevs": 2, 00:25:04.847 "num_base_bdevs_discovered": 1, 00:25:04.847 "num_base_bdevs_operational": 1, 00:25:04.847 "base_bdevs_list": [ 00:25:04.847 { 00:25:04.847 "name": null, 00:25:04.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.847 "is_configured": false, 00:25:04.847 "data_offset": 256, 00:25:04.847 "data_size": 7936 00:25:04.847 }, 00:25:04.847 { 00:25:04.847 "name": "pt2", 00:25:04.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.847 "is_configured": true, 00:25:04.847 "data_offset": 256, 00:25:04.847 "data_size": 7936 00:25:04.847 } 00:25:04.847 ] 00:25:04.847 }' 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.847 09:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.416 [2024-11-06 09:18:04.206371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.416 [2024-11-06 09:18:04.206411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:05.416 [2024-11-06 09:18:04.206512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.416 [2024-11-06 09:18:04.206572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.416 [2024-11-06 09:18:04.206584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.416 [2024-11-06 09:18:04.266396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:05.416 [2024-11-06 09:18:04.266466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.416 [2024-11-06 09:18:04.266491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:05.416 [2024-11-06 09:18:04.266504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.416 [2024-11-06 
09:18:04.268941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.416 [2024-11-06 09:18:04.268981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:05.416 [2024-11-06 09:18:04.269051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:05.416 [2024-11-06 09:18:04.269102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:05.416 [2024-11-06 09:18:04.269253] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:05.416 [2024-11-06 09:18:04.269270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.416 [2024-11-06 09:18:04.269308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:05.416 [2024-11-06 09:18:04.269379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:05.416 [2024-11-06 09:18:04.269461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:05.416 [2024-11-06 09:18:04.269470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:05.416 [2024-11-06 09:18:04.269545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:05.416 [2024-11-06 09:18:04.269637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:05.416 [2024-11-06 09:18:04.269649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:05.416 [2024-11-06 09:18:04.269761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.416 pt1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.416 09:18:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.416 09:18:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.416 "name": "raid_bdev1", 00:25:05.416 "uuid": "eb623660-10f7-46bc-82bc-8a7fa5a6a524", 00:25:05.416 "strip_size_kb": 0, 00:25:05.416 "state": "online", 00:25:05.416 "raid_level": "raid1", 00:25:05.416 "superblock": true, 00:25:05.416 "num_base_bdevs": 2, 00:25:05.417 "num_base_bdevs_discovered": 1, 00:25:05.417 "num_base_bdevs_operational": 1, 00:25:05.417 "base_bdevs_list": [ 00:25:05.417 { 00:25:05.417 "name": null, 00:25:05.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.417 "is_configured": false, 00:25:05.417 "data_offset": 256, 00:25:05.417 "data_size": 7936 00:25:05.417 }, 00:25:05.417 { 00:25:05.417 "name": "pt2", 00:25:05.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:05.417 "is_configured": true, 00:25:05.417 "data_offset": 256, 00:25:05.417 "data_size": 7936 00:25:05.417 } 00:25:05.417 ] 00:25:05.417 }' 00:25:05.417 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.417 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:05.677 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 [2024-11-06 09:18:04.686644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' eb623660-10f7-46bc-82bc-8a7fa5a6a524 '!=' eb623660-10f7-46bc-82bc-8a7fa5a6a524 ']' 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87172 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87172 ']' 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87172 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87172 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:05.936 killing process with pid 87172 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87172' 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@971 -- # kill 87172 00:25:05.936 [2024-11-06 09:18:04.771040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.936 [2024-11-06 09:18:04.771141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.936 09:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87172 00:25:05.936 [2024-11-06 09:18:04.771197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.936 [2024-11-06 09:18:04.771219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:06.195 [2024-11-06 09:18:05.000975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:07.132 09:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:25:07.132 00:25:07.132 real 0m5.999s 00:25:07.132 user 0m8.982s 00:25:07.132 sys 0m1.235s 00:25:07.132 09:18:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:07.132 09:18:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.132 ************************************ 00:25:07.132 END TEST raid_superblock_test_md_separate 00:25:07.132 ************************************ 00:25:07.391 09:18:06 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:25:07.391 09:18:06 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:25:07.391 09:18:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:07.391 09:18:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.391 09:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 ************************************ 00:25:07.391 START TEST raid_rebuild_test_sb_md_separate 00:25:07.391 
************************************ 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87493 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87493 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87493 ']' 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.391 09:18:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:07.391 Zero copy mechanism will not be used. 00:25:07.391 [2024-11-06 09:18:06.355289] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:25:07.392 [2024-11-06 09:18:06.355420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87493 ] 00:25:07.649 [2024-11-06 09:18:06.542226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.649 [2024-11-06 09:18:06.662588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.908 [2024-11-06 09:18:06.863727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.908 [2024-11-06 09:18:06.863806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 BaseBdev1_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 [2024-11-06 09:18:07.264867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:08.477 [2024-11-06 09:18:07.264942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.477 [2024-11-06 09:18:07.264971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:08.477 [2024-11-06 09:18:07.264986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.477 [2024-11-06 09:18:07.267376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.477 [2024-11-06 09:18:07.267418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:08.477 BaseBdev1 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 BaseBdev2_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 [2024-11-06 09:18:07.324773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:08.477 [2024-11-06 09:18:07.324859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.477 [2024-11-06 09:18:07.324885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:08.477 [2024-11-06 09:18:07.324899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.477 [2024-11-06 09:18:07.327319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.477 [2024-11-06 09:18:07.327366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:08.477 BaseBdev2 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 spare_malloc 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 spare_delay 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 [2024-11-06 09:18:07.411181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:08.477 [2024-11-06 09:18:07.411258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.477 [2024-11-06 09:18:07.411308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:08.477 [2024-11-06 09:18:07.411325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.477 [2024-11-06 09:18:07.413740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.477 [2024-11-06 09:18:07.413789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:08.477 spare 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:08.477 
09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.477 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.477 [2024-11-06 09:18:07.423226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.477 [2024-11-06 09:18:07.425455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.478 [2024-11-06 09:18:07.425686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:08.478 [2024-11-06 09:18:07.425715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:08.478 [2024-11-06 09:18:07.425826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:08.478 [2024-11-06 09:18:07.425968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:08.478 [2024-11-06 09:18:07.425977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:08.478 [2024-11-06 09:18:07.426093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.478 
09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.478 "name": "raid_bdev1", 00:25:08.478 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:08.478 "strip_size_kb": 0, 00:25:08.478 "state": "online", 00:25:08.478 "raid_level": "raid1", 00:25:08.478 "superblock": true, 00:25:08.478 "num_base_bdevs": 2, 00:25:08.478 "num_base_bdevs_discovered": 2, 00:25:08.478 "num_base_bdevs_operational": 2, 00:25:08.478 "base_bdevs_list": [ 00:25:08.478 { 00:25:08.478 "name": "BaseBdev1", 00:25:08.478 "uuid": "b7b94585-76b3-5777-b3e1-ae200822afba", 00:25:08.478 "is_configured": true, 00:25:08.478 "data_offset": 256, 00:25:08.478 "data_size": 7936 00:25:08.478 }, 00:25:08.478 { 00:25:08.478 "name": "BaseBdev2", 00:25:08.478 "uuid": 
"4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:08.478 "is_configured": true, 00:25:08.478 "data_offset": 256, 00:25:08.478 "data_size": 7936 00:25:08.478 } 00:25:08.478 ] 00:25:08.478 }' 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.478 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:09.046 [2024-11-06 09:18:07.870864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:09.046 09:18:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:09.046 09:18:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:09.304 [2024-11-06 09:18:08.154474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:09.304 /dev/nbd0 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:09.304 09:18:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:09.304 1+0 records in 00:25:09.304 1+0 records out 00:25:09.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427302 s, 9.6 MB/s 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:09.304 09:18:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:10.239 7936+0 records in 00:25:10.239 7936+0 records out 00:25:10.239 32505856 bytes (33 MB, 31 MiB) copied, 0.77726 s, 41.8 MB/s 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:10.239 [2024-11-06 09:18:09.239063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.239 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.239 [2024-11-06 09:18:09.275228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:10.497 09:18:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.497 "name": "raid_bdev1", 00:25:10.497 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:10.497 "strip_size_kb": 0, 00:25:10.497 "state": "online", 00:25:10.497 "raid_level": "raid1", 00:25:10.497 "superblock": true, 00:25:10.497 "num_base_bdevs": 2, 00:25:10.497 "num_base_bdevs_discovered": 1, 00:25:10.497 "num_base_bdevs_operational": 1, 00:25:10.497 "base_bdevs_list": [ 00:25:10.497 { 00:25:10.497 "name": null, 00:25:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.497 "is_configured": false, 00:25:10.497 "data_offset": 0, 00:25:10.497 "data_size": 7936 00:25:10.497 }, 00:25:10.497 { 00:25:10.497 "name": "BaseBdev2", 00:25:10.497 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:10.497 "is_configured": true, 00:25:10.497 "data_offset": 256, 00:25:10.497 "data_size": 7936 00:25:10.497 } 
00:25:10.497 ] 00:25:10.497 }' 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.497 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.755 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.755 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 [2024-11-06 09:18:09.730569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.755 [2024-11-06 09:18:09.746259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:25:10.755 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.755 09:18:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:10.755 [2024-11-06 09:18:09.748546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.133 "name": "raid_bdev1", 00:25:12.133 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:12.133 "strip_size_kb": 0, 00:25:12.133 "state": "online", 00:25:12.133 "raid_level": "raid1", 00:25:12.133 "superblock": true, 00:25:12.133 "num_base_bdevs": 2, 00:25:12.133 "num_base_bdevs_discovered": 2, 00:25:12.133 "num_base_bdevs_operational": 2, 00:25:12.133 "process": { 00:25:12.133 "type": "rebuild", 00:25:12.133 "target": "spare", 00:25:12.133 "progress": { 00:25:12.133 "blocks": 2560, 00:25:12.133 "percent": 32 00:25:12.133 } 00:25:12.133 }, 00:25:12.133 "base_bdevs_list": [ 00:25:12.133 { 00:25:12.133 "name": "spare", 00:25:12.133 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:12.133 "is_configured": true, 00:25:12.133 "data_offset": 256, 00:25:12.133 "data_size": 7936 00:25:12.133 }, 00:25:12.133 { 00:25:12.133 "name": "BaseBdev2", 00:25:12.133 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:12.133 "is_configured": true, 00:25:12.133 "data_offset": 256, 00:25:12.133 "data_size": 7936 00:25:12.133 } 00:25:12.133 ] 00:25:12.133 }' 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.133 [2024-11-06 09:18:10.876518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.133 [2024-11-06 09:18:10.954996] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:12.133 [2024-11-06 09:18:10.955096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.133 [2024-11-06 09:18:10.955114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.133 [2024-11-06 09:18:10.955126] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.133 09:18:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.133 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.133 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.133 "name": "raid_bdev1", 00:25:12.133 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:12.133 "strip_size_kb": 0, 00:25:12.133 "state": "online", 00:25:12.133 "raid_level": "raid1", 00:25:12.133 "superblock": true, 00:25:12.133 "num_base_bdevs": 2, 00:25:12.133 "num_base_bdevs_discovered": 1, 00:25:12.133 "num_base_bdevs_operational": 1, 00:25:12.133 "base_bdevs_list": [ 00:25:12.133 { 00:25:12.133 "name": null, 00:25:12.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.133 "is_configured": false, 00:25:12.133 "data_offset": 0, 00:25:12.133 "data_size": 7936 00:25:12.133 }, 00:25:12.133 { 00:25:12.133 "name": "BaseBdev2", 00:25:12.133 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:12.133 "is_configured": true, 00:25:12.133 "data_offset": 
256, 00:25:12.133 "data_size": 7936 00:25:12.133 } 00:25:12.133 ] 00:25:12.133 }' 00:25:12.133 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.133 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.393 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.653 "name": "raid_bdev1", 00:25:12.653 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:12.653 "strip_size_kb": 0, 00:25:12.653 "state": "online", 00:25:12.653 "raid_level": "raid1", 00:25:12.653 "superblock": true, 00:25:12.653 "num_base_bdevs": 2, 00:25:12.653 "num_base_bdevs_discovered": 1, 00:25:12.653 "num_base_bdevs_operational": 1, 
00:25:12.653 "base_bdevs_list": [ 00:25:12.653 { 00:25:12.653 "name": null, 00:25:12.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.653 "is_configured": false, 00:25:12.653 "data_offset": 0, 00:25:12.653 "data_size": 7936 00:25:12.653 }, 00:25:12.653 { 00:25:12.653 "name": "BaseBdev2", 00:25:12.653 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:12.653 "is_configured": true, 00:25:12.653 "data_offset": 256, 00:25:12.653 "data_size": 7936 00:25:12.653 } 00:25:12.653 ] 00:25:12.653 }' 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.653 [2024-11-06 09:18:11.563035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.653 [2024-11-06 09:18:11.578086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.653 09:18:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:12.653 [2024-11-06 09:18:11.580574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.591 09:18:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.591 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.850 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.850 "name": "raid_bdev1", 00:25:13.850 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:13.850 "strip_size_kb": 0, 00:25:13.850 "state": "online", 00:25:13.850 "raid_level": "raid1", 00:25:13.850 "superblock": true, 00:25:13.850 "num_base_bdevs": 2, 00:25:13.850 "num_base_bdevs_discovered": 2, 00:25:13.851 "num_base_bdevs_operational": 2, 00:25:13.851 "process": { 00:25:13.851 "type": "rebuild", 00:25:13.851 "target": "spare", 00:25:13.851 "progress": { 00:25:13.851 "blocks": 2560, 00:25:13.851 "percent": 32 00:25:13.851 } 00:25:13.851 }, 00:25:13.851 "base_bdevs_list": [ 00:25:13.851 { 00:25:13.851 "name": "spare", 00:25:13.851 "uuid": 
"a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:13.851 "is_configured": true, 00:25:13.851 "data_offset": 256, 00:25:13.851 "data_size": 7936 00:25:13.851 }, 00:25:13.851 { 00:25:13.851 "name": "BaseBdev2", 00:25:13.851 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:13.851 "is_configured": true, 00:25:13.851 "data_offset": 256, 00:25:13.851 "data_size": 7936 00:25:13.851 } 00:25:13.851 ] 00:25:13.851 }' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:13.851 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=707 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.851 
09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.851 "name": "raid_bdev1", 00:25:13.851 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:13.851 "strip_size_kb": 0, 00:25:13.851 "state": "online", 00:25:13.851 "raid_level": "raid1", 00:25:13.851 "superblock": true, 00:25:13.851 "num_base_bdevs": 2, 00:25:13.851 "num_base_bdevs_discovered": 2, 00:25:13.851 "num_base_bdevs_operational": 2, 00:25:13.851 "process": { 00:25:13.851 "type": "rebuild", 00:25:13.851 "target": "spare", 00:25:13.851 "progress": { 00:25:13.851 "blocks": 2816, 00:25:13.851 "percent": 35 00:25:13.851 } 00:25:13.851 }, 00:25:13.851 "base_bdevs_list": [ 00:25:13.851 { 00:25:13.851 "name": "spare", 00:25:13.851 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:13.851 "is_configured": true, 00:25:13.851 "data_offset": 256, 00:25:13.851 "data_size": 7936 00:25:13.851 
}, 00:25:13.851 { 00:25:13.851 "name": "BaseBdev2", 00:25:13.851 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:13.851 "is_configured": true, 00:25:13.851 "data_offset": 256, 00:25:13.851 "data_size": 7936 00:25:13.851 } 00:25:13.851 ] 00:25:13.851 }' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.851 09:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:15.240 "name": "raid_bdev1", 00:25:15.240 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:15.240 "strip_size_kb": 0, 00:25:15.240 "state": "online", 00:25:15.240 "raid_level": "raid1", 00:25:15.240 "superblock": true, 00:25:15.240 "num_base_bdevs": 2, 00:25:15.240 "num_base_bdevs_discovered": 2, 00:25:15.240 "num_base_bdevs_operational": 2, 00:25:15.240 "process": { 00:25:15.240 "type": "rebuild", 00:25:15.240 "target": "spare", 00:25:15.240 "progress": { 00:25:15.240 "blocks": 5632, 00:25:15.240 "percent": 70 00:25:15.240 } 00:25:15.240 }, 00:25:15.240 "base_bdevs_list": [ 00:25:15.240 { 00:25:15.240 "name": "spare", 00:25:15.240 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:15.240 "is_configured": true, 00:25:15.240 "data_offset": 256, 00:25:15.240 "data_size": 7936 00:25:15.240 }, 00:25:15.240 { 00:25:15.240 "name": "BaseBdev2", 00:25:15.240 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:15.240 "is_configured": true, 00:25:15.240 "data_offset": 256, 00:25:15.240 "data_size": 7936 00:25:15.240 } 00:25:15.240 ] 00:25:15.240 }' 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.240 09:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:15.240 09:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.240 09:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:25:15.807 [2024-11-06 09:18:14.696695] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:15.807 [2024-11-06 09:18:14.696792] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:15.807 [2024-11-06 09:18:14.696925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.065 "name": "raid_bdev1", 00:25:16.065 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:16.065 
"strip_size_kb": 0, 00:25:16.065 "state": "online", 00:25:16.065 "raid_level": "raid1", 00:25:16.065 "superblock": true, 00:25:16.065 "num_base_bdevs": 2, 00:25:16.065 "num_base_bdevs_discovered": 2, 00:25:16.065 "num_base_bdevs_operational": 2, 00:25:16.065 "base_bdevs_list": [ 00:25:16.065 { 00:25:16.065 "name": "spare", 00:25:16.065 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:16.065 "is_configured": true, 00:25:16.065 "data_offset": 256, 00:25:16.065 "data_size": 7936 00:25:16.065 }, 00:25:16.065 { 00:25:16.065 "name": "BaseBdev2", 00:25:16.065 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:16.065 "is_configured": true, 00:25:16.065 "data_offset": 256, 00:25:16.065 "data_size": 7936 00:25:16.065 } 00:25:16.065 ] 00:25:16.065 }' 00:25:16.065 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.324 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.324 "name": "raid_bdev1", 00:25:16.324 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:16.324 "strip_size_kb": 0, 00:25:16.324 "state": "online", 00:25:16.324 "raid_level": "raid1", 00:25:16.324 "superblock": true, 00:25:16.324 "num_base_bdevs": 2, 00:25:16.324 "num_base_bdevs_discovered": 2, 00:25:16.324 "num_base_bdevs_operational": 2, 00:25:16.324 "base_bdevs_list": [ 00:25:16.324 { 00:25:16.324 "name": "spare", 00:25:16.324 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:16.324 "is_configured": true, 00:25:16.324 "data_offset": 256, 00:25:16.324 "data_size": 7936 00:25:16.324 }, 00:25:16.324 { 00:25:16.324 "name": "BaseBdev2", 00:25:16.324 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:16.324 "is_configured": true, 00:25:16.324 "data_offset": 256, 00:25:16.324 "data_size": 7936 00:25:16.324 } 00:25:16.324 ] 00:25:16.324 }' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:16.324 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.324 "name": "raid_bdev1", 00:25:16.324 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:16.324 "strip_size_kb": 0, 00:25:16.324 "state": "online", 00:25:16.324 "raid_level": "raid1", 00:25:16.324 "superblock": true, 00:25:16.324 "num_base_bdevs": 2, 00:25:16.324 "num_base_bdevs_discovered": 2, 00:25:16.324 "num_base_bdevs_operational": 2, 00:25:16.324 "base_bdevs_list": [ 00:25:16.324 { 00:25:16.324 "name": "spare", 00:25:16.324 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:16.324 "is_configured": true, 00:25:16.324 "data_offset": 256, 00:25:16.324 "data_size": 7936 00:25:16.324 }, 00:25:16.324 { 00:25:16.324 "name": "BaseBdev2", 00:25:16.324 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:16.324 "is_configured": true, 00:25:16.324 "data_offset": 256, 00:25:16.324 "data_size": 7936 00:25:16.324 } 00:25:16.324 ] 00:25:16.324 }' 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.324 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.891 [2024-11-06 09:18:15.671982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.891 [2024-11-06 09:18:15.672025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.891 [2024-11-06 09:18:15.672152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.891 [2024-11-06 09:18:15.672233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:25:16.891 [2024-11-06 09:18:15.672254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:16.891 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:16.891 09:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:17.164 /dev/nbd0 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.164 1+0 records in 00:25:17.164 1+0 records out 00:25:17.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041511 
s, 9.9 MB/s 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:17.164 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:17.423 /dev/nbd1 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:17.423 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@875 -- # break 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.424 1+0 records in 00:25:17.424 1+0 records out 00:25:17.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793763 s, 5.2 MB/s 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:17.424 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:17.682 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:17.941 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:18.199 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:18.199 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:18.199 
09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:18.199 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.199 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:18.199 09:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.199 [2024-11-06 09:18:17.020241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:18.199 [2024-11-06 09:18:17.020323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.199 [2024-11-06 09:18:17.020350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
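The `waitfornbd` and `waitfornbd_exit` helpers traced above both poll `/proc/partitions` for up to 20 iterations, one waiting for the nbd device to appear and the other for it to disappear. A minimal sketch of that pattern follows; the `PARTS` override is a hypothetical addition for illustration only — the real helpers in `autotest_common.sh` and `nbd_common.sh` read `/proc/partitions` directly, and `waitfornbd` additionally verifies the device with a direct-I/O `dd` read as seen in the trace.

```shell
# Sketch of the polling pattern from the trace. PARTS is a hypothetical
# override so the loops can be exercised against a plain file; the real
# helpers hardcode /proc/partitions.
waitfornbd() {
	local nbd_name=$1 i
	for ((i = 1; i <= 20; i++)); do
		# succeed as soon as the kernel lists the device
		grep -q -w "$nbd_name" "${PARTS:-/proc/partitions}" && return 0
		sleep 0.1
	done
	return 1
}

waitfornbd_exit() {
	local nbd_name=$1 i
	for ((i = 1; i <= 20; i++)); do
		# inverse condition: succeed once the device is gone
		grep -q -w "$nbd_name" "${PARTS:-/proc/partitions}" || return 0
		sleep 0.1
	done
	return 1
}
```

The bounded retry with a short sleep keeps a transient udev/kernel delay from failing the test while still capping the wait at roughly two seconds.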
00:25:18.199 [2024-11-06 09:18:17.020362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.199 [2024-11-06 09:18:17.022819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.199 [2024-11-06 09:18:17.022864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:18.199 [2024-11-06 09:18:17.022947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:18.199 [2024-11-06 09:18:17.023011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.199 [2024-11-06 09:18:17.023192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.199 spare 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.199 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.199 [2024-11-06 09:18:17.123135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:18.199 [2024-11-06 09:18:17.123210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:18.199 [2024-11-06 09:18:17.123380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:18.199 [2024-11-06 09:18:17.123559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:18.199 [2024-11-06 09:18:17.123573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:18.200 [2024-11-06 09:18:17.123728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
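Log steps @745-@747 above tear down the delay passthru stacked on the spare bdev, recreate it, and wait for the examine path to re-claim it into raid_bdev1. A hypothetical condensation of that sequence, where `rpc` stands in for `/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock` (an assumption for testability; the trace invokes rpc.py directly via `rpc_cmd`):

```shell
# Condensed sketch of trace steps @745-@747: recycle the spare passthru
# and block until bdev examine has finished re-claiming it.
respawn_spare() {
	local rpc=$1
	"$rpc" bdev_passthru_delete spare || return 1
	"$rpc" bdev_passthru_create -b spare_delay -p spare || return 1
	"$rpc" bdev_wait_for_examine || return 1
}
```

Recreating the passthru with the same `-p spare` name is what lets the raid superblock examine logic find and re-add the device without the test knowing the underlying base bdev.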
00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.200 09:18:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.200 "name": "raid_bdev1", 00:25:18.200 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:18.200 "strip_size_kb": 0, 00:25:18.200 "state": "online", 00:25:18.200 "raid_level": "raid1", 00:25:18.200 "superblock": true, 00:25:18.200 "num_base_bdevs": 2, 00:25:18.200 "num_base_bdevs_discovered": 2, 00:25:18.200 "num_base_bdevs_operational": 2, 00:25:18.200 "base_bdevs_list": [ 00:25:18.200 { 00:25:18.200 "name": "spare", 00:25:18.200 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:18.200 "is_configured": true, 00:25:18.200 "data_offset": 256, 00:25:18.200 "data_size": 7936 00:25:18.200 }, 00:25:18.200 { 00:25:18.200 "name": "BaseBdev2", 00:25:18.200 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:18.200 "is_configured": true, 00:25:18.200 "data_offset": 256, 00:25:18.200 "data_size": 7936 00:25:18.200 } 00:25:18.200 ] 00:25:18.200 }' 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.200 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.766 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:18.767 "name": "raid_bdev1", 00:25:18.767 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:18.767 "strip_size_kb": 0, 00:25:18.767 "state": "online", 00:25:18.767 "raid_level": "raid1", 00:25:18.767 "superblock": true, 00:25:18.767 "num_base_bdevs": 2, 00:25:18.767 "num_base_bdevs_discovered": 2, 00:25:18.767 "num_base_bdevs_operational": 2, 00:25:18.767 "base_bdevs_list": [ 00:25:18.767 { 00:25:18.767 "name": "spare", 00:25:18.767 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:18.767 "is_configured": true, 00:25:18.767 "data_offset": 256, 00:25:18.767 "data_size": 7936 00:25:18.767 }, 00:25:18.767 { 00:25:18.767 "name": "BaseBdev2", 00:25:18.767 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:18.767 "is_configured": true, 00:25:18.767 "data_offset": 256, 00:25:18.767 "data_size": 7936 00:25:18.767 } 00:25:18.767 ] 00:25:18.767 }' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.767 [2024-11-06 09:18:17.687350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:18.767 09:18:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.767 "name": "raid_bdev1", 00:25:18.767 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:18.767 "strip_size_kb": 0, 00:25:18.767 "state": "online", 00:25:18.767 "raid_level": "raid1", 00:25:18.767 "superblock": true, 00:25:18.767 "num_base_bdevs": 2, 00:25:18.767 "num_base_bdevs_discovered": 1, 00:25:18.767 "num_base_bdevs_operational": 1, 00:25:18.767 "base_bdevs_list": [ 00:25:18.767 { 00:25:18.767 "name": null, 00:25:18.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.767 "is_configured": false, 00:25:18.767 "data_offset": 0, 00:25:18.767 "data_size": 7936 00:25:18.767 }, 00:25:18.767 { 00:25:18.767 "name": "BaseBdev2", 00:25:18.767 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:18.767 "is_configured": true, 00:25:18.767 "data_offset": 256, 00:25:18.767 "data_size": 7936 00:25:18.767 } 
00:25:18.767 ] 00:25:18.767 }' 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.767 09:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.334 09:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:19.334 09:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.334 09:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.334 [2024-11-06 09:18:18.082794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.334 [2024-11-06 09:18:18.083004] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:19.334 [2024-11-06 09:18:18.083024] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
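The `verify_raid_bdev_state` checks above (bdev_raid.sh@113) boil down to selecting one bdev's object out of `bdev_raid_get_bdevs all` with jq and comparing individual fields. A minimal sketch against a canned JSON document — `sample_json` is fabricated here for illustration; the real script pipes `rpc.py` output into the same filter:

```shell
# Fabricated stand-in for `rpc.py bdev_raid_get_bdevs all` output.
sample_json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}]'

# The same selection the trace uses at bdev_raid.sh@113.
info=$(printf '%s' "$sample_json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Pull out the fields verify_raid_bdev_state compares.
state=$(printf '%s' "$info" | jq -r .state)
level=$(printf '%s' "$info" | jq -r .raid_level)
discovered=$(printf '%s' "$info" | jq -r .num_base_bdevs_discovered)
```

The related process check at bdev_raid.sh@176-@177 uses jq's alternative operator, `.process.type // "none"`, so a bdev with no active rebuild compares equal to the literal string "none" instead of "null".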
00:25:19.334 [2024-11-06 09:18:18.083079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.334 [2024-11-06 09:18:18.098009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:19.334 09:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.334 09:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:19.334 [2024-11-06 09:18:18.100533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.270 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:20.270 "name": "raid_bdev1", 00:25:20.270 
"uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:20.270 "strip_size_kb": 0, 00:25:20.270 "state": "online", 00:25:20.270 "raid_level": "raid1", 00:25:20.270 "superblock": true, 00:25:20.270 "num_base_bdevs": 2, 00:25:20.270 "num_base_bdevs_discovered": 2, 00:25:20.270 "num_base_bdevs_operational": 2, 00:25:20.270 "process": { 00:25:20.270 "type": "rebuild", 00:25:20.270 "target": "spare", 00:25:20.270 "progress": { 00:25:20.270 "blocks": 2560, 00:25:20.270 "percent": 32 00:25:20.270 } 00:25:20.270 }, 00:25:20.270 "base_bdevs_list": [ 00:25:20.270 { 00:25:20.270 "name": "spare", 00:25:20.270 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:20.270 "is_configured": true, 00:25:20.271 "data_offset": 256, 00:25:20.271 "data_size": 7936 00:25:20.271 }, 00:25:20.271 { 00:25:20.271 "name": "BaseBdev2", 00:25:20.271 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:20.271 "is_configured": true, 00:25:20.271 "data_offset": 256, 00:25:20.271 "data_size": 7936 00:25:20.271 } 00:25:20.271 ] 00:25:20.271 }' 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.271 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.271 [2024-11-06 09:18:19.252401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:20.271 
[2024-11-06 09:18:19.306685] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:20.271 [2024-11-06 09:18:19.306790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.271 [2024-11-06 09:18:19.306809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:20.271 [2024-11-06 09:18:19.306833] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.530 09:18:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.530 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.530 "name": "raid_bdev1", 00:25:20.530 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:20.530 "strip_size_kb": 0, 00:25:20.530 "state": "online", 00:25:20.530 "raid_level": "raid1", 00:25:20.530 "superblock": true, 00:25:20.530 "num_base_bdevs": 2, 00:25:20.530 "num_base_bdevs_discovered": 1, 00:25:20.530 "num_base_bdevs_operational": 1, 00:25:20.530 "base_bdevs_list": [ 00:25:20.530 { 00:25:20.531 "name": null, 00:25:20.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.531 "is_configured": false, 00:25:20.531 "data_offset": 0, 00:25:20.531 "data_size": 7936 00:25:20.531 }, 00:25:20.531 { 00:25:20.531 "name": "BaseBdev2", 00:25:20.531 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:20.531 "is_configured": true, 00:25:20.531 "data_offset": 256, 00:25:20.531 "data_size": 7936 00:25:20.531 } 00:25:20.531 ] 00:25:20.531 }' 00:25:20.531 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.531 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.792 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:20.792 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.792 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.792 [2024-11-06 09:18:19.802646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:20.792 [2024-11-06 09:18:19.802730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.792 [2024-11-06 09:18:19.802760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:20.792 [2024-11-06 09:18:19.802775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.792 [2024-11-06 09:18:19.803051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.792 [2024-11-06 09:18:19.803073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:20.792 [2024-11-06 09:18:19.803141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:20.792 [2024-11-06 09:18:19.803158] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:20.792 [2024-11-06 09:18:19.803170] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:20.792 [2024-11-06 09:18:19.803194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:20.792 [2024-11-06 09:18:19.817741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:20.792 spare 00:25:20.792 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.792 09:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:20.792 [2024-11-06 09:18:19.820040] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.167 "name": 
"raid_bdev1", 00:25:22.167 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:22.167 "strip_size_kb": 0, 00:25:22.167 "state": "online", 00:25:22.167 "raid_level": "raid1", 00:25:22.167 "superblock": true, 00:25:22.167 "num_base_bdevs": 2, 00:25:22.167 "num_base_bdevs_discovered": 2, 00:25:22.167 "num_base_bdevs_operational": 2, 00:25:22.167 "process": { 00:25:22.167 "type": "rebuild", 00:25:22.167 "target": "spare", 00:25:22.167 "progress": { 00:25:22.167 "blocks": 2560, 00:25:22.167 "percent": 32 00:25:22.167 } 00:25:22.167 }, 00:25:22.167 "base_bdevs_list": [ 00:25:22.167 { 00:25:22.167 "name": "spare", 00:25:22.167 "uuid": "a997324c-6afe-58a1-81b9-3b5d97229cad", 00:25:22.167 "is_configured": true, 00:25:22.167 "data_offset": 256, 00:25:22.167 "data_size": 7936 00:25:22.167 }, 00:25:22.167 { 00:25:22.167 "name": "BaseBdev2", 00:25:22.167 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:22.167 "is_configured": true, 00:25:22.167 "data_offset": 256, 00:25:22.167 "data_size": 7936 00:25:22.167 } 00:25:22.167 ] 00:25:22.167 }' 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.167 09:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.167 [2024-11-06 09:18:20.968649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:22.167 [2024-11-06 09:18:21.026249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:22.167 [2024-11-06 09:18:21.026344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.167 [2024-11-06 09:18:21.026368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:22.167 [2024-11-06 09:18:21.026377] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.167 "name": "raid_bdev1", 00:25:22.167 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:22.167 "strip_size_kb": 0, 00:25:22.167 "state": "online", 00:25:22.167 "raid_level": "raid1", 00:25:22.167 "superblock": true, 00:25:22.167 "num_base_bdevs": 2, 00:25:22.167 "num_base_bdevs_discovered": 1, 00:25:22.167 "num_base_bdevs_operational": 1, 00:25:22.167 "base_bdevs_list": [ 00:25:22.167 { 00:25:22.167 "name": null, 00:25:22.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.167 "is_configured": false, 00:25:22.167 "data_offset": 0, 00:25:22.167 "data_size": 7936 00:25:22.167 }, 00:25:22.167 { 00:25:22.167 "name": "BaseBdev2", 00:25:22.167 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:22.167 "is_configured": true, 00:25:22.167 "data_offset": 256, 00:25:22.167 "data_size": 7936 00:25:22.167 } 00:25:22.167 ] 00:25:22.167 }' 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.167 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:22.733 09:18:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.733 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.733 "name": "raid_bdev1", 00:25:22.733 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:22.733 "strip_size_kb": 0, 00:25:22.733 "state": "online", 00:25:22.733 "raid_level": "raid1", 00:25:22.733 "superblock": true, 00:25:22.733 "num_base_bdevs": 2, 00:25:22.733 "num_base_bdevs_discovered": 1, 00:25:22.733 "num_base_bdevs_operational": 1, 00:25:22.733 "base_bdevs_list": [ 00:25:22.733 { 00:25:22.733 "name": null, 00:25:22.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.734 "is_configured": false, 00:25:22.734 "data_offset": 0, 00:25:22.734 "data_size": 7936 00:25:22.734 }, 00:25:22.734 { 00:25:22.734 "name": "BaseBdev2", 00:25:22.734 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:22.734 "is_configured": true, 00:25:22.734 "data_offset": 256, 00:25:22.734 "data_size": 7936 00:25:22.734 } 00:25:22.734 ] 00:25:22.734 }' 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.734 [2024-11-06 09:18:21.654382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:22.734 [2024-11-06 09:18:21.654452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.734 [2024-11-06 09:18:21.654490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:22.734 [2024-11-06 09:18:21.654503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.734 [2024-11-06 09:18:21.654742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.734 [2024-11-06 09:18:21.654757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:22.734 [2024-11-06 09:18:21.654816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:22.734 [2024-11-06 09:18:21.654831] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:22.734 [2024-11-06 09:18:21.654844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:22.734 [2024-11-06 09:18:21.654856] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:22.734 BaseBdev1 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.734 09:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.667 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.924 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.924 "name": "raid_bdev1", 00:25:23.924 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:23.924 "strip_size_kb": 0, 00:25:23.924 "state": "online", 00:25:23.924 "raid_level": "raid1", 00:25:23.924 "superblock": true, 00:25:23.924 "num_base_bdevs": 2, 00:25:23.924 "num_base_bdevs_discovered": 1, 00:25:23.924 "num_base_bdevs_operational": 1, 00:25:23.924 "base_bdevs_list": [ 00:25:23.924 { 00:25:23.924 "name": null, 00:25:23.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.924 "is_configured": false, 00:25:23.924 "data_offset": 0, 00:25:23.924 "data_size": 7936 00:25:23.924 }, 00:25:23.924 { 00:25:23.924 "name": "BaseBdev2", 00:25:23.924 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:23.924 "is_configured": true, 00:25:23.924 "data_offset": 256, 00:25:23.924 "data_size": 7936 00:25:23.924 } 00:25:23.924 ] 00:25:23.924 }' 00:25:23.924 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.924 09:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.182 "name": "raid_bdev1", 00:25:24.182 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:24.182 "strip_size_kb": 0, 00:25:24.182 "state": "online", 00:25:24.182 "raid_level": "raid1", 00:25:24.182 "superblock": true, 00:25:24.182 "num_base_bdevs": 2, 00:25:24.182 "num_base_bdevs_discovered": 1, 00:25:24.182 "num_base_bdevs_operational": 1, 00:25:24.182 "base_bdevs_list": [ 00:25:24.182 { 00:25:24.182 "name": null, 00:25:24.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.182 "is_configured": false, 00:25:24.182 "data_offset": 0, 00:25:24.182 "data_size": 7936 00:25:24.182 }, 00:25:24.182 { 00:25:24.182 "name": "BaseBdev2", 00:25:24.182 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:24.182 "is_configured": 
true, 00:25:24.182 "data_offset": 256, 00:25:24.182 "data_size": 7936 00:25:24.182 } 00:25:24.182 ] 00:25:24.182 }' 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.182 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.440 [2024-11-06 09:18:23.225992] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.440 [2024-11-06 09:18:23.226166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:24.440 [2024-11-06 09:18:23.226186] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:24.440 request: 00:25:24.440 { 00:25:24.440 "base_bdev": "BaseBdev1", 00:25:24.440 "raid_bdev": "raid_bdev1", 00:25:24.440 "method": "bdev_raid_add_base_bdev", 00:25:24.440 "req_id": 1 00:25:24.440 } 00:25:24.440 Got JSON-RPC error response 00:25:24.440 response: 00:25:24.440 { 00:25:24.440 "code": -22, 00:25:24.440 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:24.440 } 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:24.440 09:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.374 "name": "raid_bdev1", 00:25:25.374 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:25.374 "strip_size_kb": 0, 00:25:25.374 "state": "online", 00:25:25.374 "raid_level": "raid1", 00:25:25.374 "superblock": true, 00:25:25.374 "num_base_bdevs": 2, 00:25:25.374 "num_base_bdevs_discovered": 1, 00:25:25.374 "num_base_bdevs_operational": 1, 00:25:25.374 "base_bdevs_list": [ 00:25:25.374 { 00:25:25.374 "name": null, 00:25:25.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.374 "is_configured": false, 00:25:25.374 
"data_offset": 0, 00:25:25.374 "data_size": 7936 00:25:25.374 }, 00:25:25.374 { 00:25:25.374 "name": "BaseBdev2", 00:25:25.374 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:25.374 "is_configured": true, 00:25:25.374 "data_offset": 256, 00:25:25.374 "data_size": 7936 00:25:25.374 } 00:25:25.374 ] 00:25:25.374 }' 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.374 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.941 "name": "raid_bdev1", 00:25:25.941 "uuid": "b6f77a32-c7a3-427a-9b15-a95027a5d97e", 00:25:25.941 
"strip_size_kb": 0, 00:25:25.941 "state": "online", 00:25:25.941 "raid_level": "raid1", 00:25:25.941 "superblock": true, 00:25:25.941 "num_base_bdevs": 2, 00:25:25.941 "num_base_bdevs_discovered": 1, 00:25:25.941 "num_base_bdevs_operational": 1, 00:25:25.941 "base_bdevs_list": [ 00:25:25.941 { 00:25:25.941 "name": null, 00:25:25.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.941 "is_configured": false, 00:25:25.941 "data_offset": 0, 00:25:25.941 "data_size": 7936 00:25:25.941 }, 00:25:25.941 { 00:25:25.941 "name": "BaseBdev2", 00:25:25.941 "uuid": "4fb6298e-9d42-58cd-83ea-7abcca5f6263", 00:25:25.941 "is_configured": true, 00:25:25.941 "data_offset": 256, 00:25:25.941 "data_size": 7936 00:25:25.941 } 00:25:25.941 ] 00:25:25.941 }' 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.941 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87493 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87493 ']' 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87493 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87493 00:25:25.942 09:18:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:25.942 killing process with pid 87493 00:25:25.942 Received shutdown signal, test time was about 60.000000 seconds 00:25:25.942 00:25:25.942 Latency(us) 00:25:25.942 [2024-11-06T09:18:24.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.942 [2024-11-06T09:18:24.982Z] =================================================================================================================== 00:25:25.942 [2024-11-06T09:18:24.982Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87493' 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87493 00:25:25.942 [2024-11-06 09:18:24.901342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.942 09:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87493 00:25:25.942 [2024-11-06 09:18:24.901486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.942 [2024-11-06 09:18:24.901536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.942 [2024-11-06 09:18:24.901552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:26.508 [2024-11-06 09:18:25.243025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:27.445 09:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:25:27.445 00:25:27.445 real 0m20.179s 00:25:27.445 user 0m26.039s 00:25:27.445 sys 0m3.050s 00:25:27.445 09:18:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:27.445 ************************************ 00:25:27.445 END TEST raid_rebuild_test_sb_md_separate 00:25:27.445 ************************************ 00:25:27.445 09:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:27.705 09:18:26 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:25:27.705 09:18:26 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:27.705 09:18:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:27.705 09:18:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:27.705 09:18:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:27.705 ************************************ 00:25:27.705 START TEST raid_state_function_test_sb_md_interleaved 00:25:27.705 ************************************ 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:27.705 09:18:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:27.705 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:25:27.706 Process raid pid: 88182 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88182 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88182' 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88182 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88182 ']' 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.706 09:18:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.706 [2024-11-06 09:18:26.627751] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:25:27.706 [2024-11-06 09:18:26.628128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.965 [2024-11-06 09:18:26.804776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.965 [2024-11-06 09:18:26.936701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.225 [2024-11-06 09:18:27.164813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.225 [2024-11-06 09:18:27.165075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.484 [2024-11-06 09:18:27.493807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.484 [2024-11-06 09:18:27.493869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.484 [2024-11-06 09:18:27.493882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.484 [2024-11-06 09:18:27.493896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.484 09:18:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.484 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.744 09:18:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.744 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.744 "name": "Existed_Raid", 00:25:28.744 "uuid": "b9ea2969-e09b-4e89-9717-2914afdb2ee8", 00:25:28.744 "strip_size_kb": 0, 00:25:28.744 "state": "configuring", 00:25:28.744 "raid_level": "raid1", 00:25:28.744 "superblock": true, 00:25:28.744 "num_base_bdevs": 2, 00:25:28.744 "num_base_bdevs_discovered": 0, 00:25:28.744 "num_base_bdevs_operational": 2, 00:25:28.744 "base_bdevs_list": [ 00:25:28.744 { 00:25:28.744 "name": "BaseBdev1", 00:25:28.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.744 "is_configured": false, 00:25:28.744 "data_offset": 0, 00:25:28.744 "data_size": 0 00:25:28.744 }, 00:25:28.744 { 00:25:28.744 "name": "BaseBdev2", 00:25:28.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.744 "is_configured": false, 00:25:28.744 "data_offset": 0, 00:25:28.744 "data_size": 0 00:25:28.744 } 00:25:28.744 ] 00:25:28.744 }' 00:25:28.744 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.744 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 [2024-11-06 09:18:27.941121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.003 [2024-11-06 09:18:27.941159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 [2024-11-06 09:18:27.957106] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.003 [2024-11-06 09:18:27.957160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.003 [2024-11-06 09:18:27.957172] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.003 [2024-11-06 09:18:27.957188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.003 09:18:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 [2024-11-06 09:18:28.007207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.003 BaseBdev1 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.003 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.003 [ 00:25:29.003 { 00:25:29.003 "name": "BaseBdev1", 00:25:29.003 "aliases": [ 00:25:29.003 "deb10003-8c10-49ab-82df-0f65c13f8431" 00:25:29.003 ], 00:25:29.003 "product_name": "Malloc disk", 00:25:29.003 "block_size": 4128, 00:25:29.003 "num_blocks": 8192, 00:25:29.003 "uuid": "deb10003-8c10-49ab-82df-0f65c13f8431", 00:25:29.267 "md_size": 32, 00:25:29.267 
"md_interleave": true, 00:25:29.267 "dif_type": 0, 00:25:29.267 "assigned_rate_limits": { 00:25:29.267 "rw_ios_per_sec": 0, 00:25:29.267 "rw_mbytes_per_sec": 0, 00:25:29.267 "r_mbytes_per_sec": 0, 00:25:29.267 "w_mbytes_per_sec": 0 00:25:29.267 }, 00:25:29.267 "claimed": true, 00:25:29.267 "claim_type": "exclusive_write", 00:25:29.267 "zoned": false, 00:25:29.267 "supported_io_types": { 00:25:29.267 "read": true, 00:25:29.267 "write": true, 00:25:29.267 "unmap": true, 00:25:29.267 "flush": true, 00:25:29.267 "reset": true, 00:25:29.267 "nvme_admin": false, 00:25:29.267 "nvme_io": false, 00:25:29.267 "nvme_io_md": false, 00:25:29.267 "write_zeroes": true, 00:25:29.267 "zcopy": true, 00:25:29.267 "get_zone_info": false, 00:25:29.267 "zone_management": false, 00:25:29.267 "zone_append": false, 00:25:29.267 "compare": false, 00:25:29.267 "compare_and_write": false, 00:25:29.267 "abort": true, 00:25:29.267 "seek_hole": false, 00:25:29.267 "seek_data": false, 00:25:29.267 "copy": true, 00:25:29.267 "nvme_iov_md": false 00:25:29.267 }, 00:25:29.267 "memory_domains": [ 00:25:29.267 { 00:25:29.267 "dma_device_id": "system", 00:25:29.267 "dma_device_type": 1 00:25:29.267 }, 00:25:29.267 { 00:25:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.268 "dma_device_type": 2 00:25:29.268 } 00:25:29.268 ], 00:25:29.268 "driver_specific": {} 00:25:29.268 } 00:25:29.268 ] 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.268 09:18:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.268 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.268 "name": "Existed_Raid", 00:25:29.268 "uuid": "e6167422-09ca-4cb0-94f0-8bb788de1b10", 00:25:29.268 "strip_size_kb": 0, 00:25:29.268 "state": "configuring", 00:25:29.269 "raid_level": "raid1", 
00:25:29.269 "superblock": true, 00:25:29.269 "num_base_bdevs": 2, 00:25:29.269 "num_base_bdevs_discovered": 1, 00:25:29.269 "num_base_bdevs_operational": 2, 00:25:29.269 "base_bdevs_list": [ 00:25:29.269 { 00:25:29.269 "name": "BaseBdev1", 00:25:29.269 "uuid": "deb10003-8c10-49ab-82df-0f65c13f8431", 00:25:29.269 "is_configured": true, 00:25:29.269 "data_offset": 256, 00:25:29.269 "data_size": 7936 00:25:29.269 }, 00:25:29.269 { 00:25:29.269 "name": "BaseBdev2", 00:25:29.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.269 "is_configured": false, 00:25:29.269 "data_offset": 0, 00:25:29.269 "data_size": 0 00:25:29.269 } 00:25:29.269 ] 00:25:29.269 }' 00:25:29.269 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.269 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 [2024-11-06 09:18:28.470683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.530 [2024-11-06 09:18:28.470886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 [2024-11-06 09:18:28.482736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.530 [2024-11-06 09:18:28.484999] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.530 [2024-11-06 09:18:28.485054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.530 
09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.530 "name": "Existed_Raid", 00:25:29.530 "uuid": "c51045a8-2bd5-46b3-8363-c079c0f8163f", 00:25:29.530 "strip_size_kb": 0, 00:25:29.530 "state": "configuring", 00:25:29.530 "raid_level": "raid1", 00:25:29.530 "superblock": true, 00:25:29.530 "num_base_bdevs": 2, 00:25:29.530 "num_base_bdevs_discovered": 1, 00:25:29.530 "num_base_bdevs_operational": 2, 00:25:29.530 "base_bdevs_list": [ 00:25:29.530 { 00:25:29.530 "name": "BaseBdev1", 00:25:29.530 "uuid": "deb10003-8c10-49ab-82df-0f65c13f8431", 00:25:29.530 "is_configured": true, 00:25:29.530 "data_offset": 256, 00:25:29.530 "data_size": 7936 00:25:29.530 }, 00:25:29.530 { 00:25:29.530 "name": "BaseBdev2", 00:25:29.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.530 "is_configured": false, 00:25:29.530 "data_offset": 0, 00:25:29.530 "data_size": 0 00:25:29.530 } 00:25:29.530 ] 00:25:29.530 }' 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:29.530 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.101 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:30.101 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.101 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.101 [2024-11-06 09:18:28.963112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.101 [2024-11-06 09:18:28.963351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:30.101 [2024-11-06 09:18:28.963366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:30.101 [2024-11-06 09:18:28.963474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:30.101 [2024-11-06 09:18:28.963559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:30.101 [2024-11-06 09:18:28.963572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:30.101 [2024-11-06 09:18:28.963631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.101 BaseBdev2 00:25:30.101 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.101 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.102 09:18:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 [ 00:25:30.102 { 00:25:30.102 "name": "BaseBdev2", 00:25:30.102 "aliases": [ 00:25:30.102 "d6162e89-dbc6-4b45-8aab-a50ea8aefc34" 00:25:30.102 ], 00:25:30.102 "product_name": "Malloc disk", 00:25:30.102 "block_size": 4128, 00:25:30.102 "num_blocks": 8192, 00:25:30.102 "uuid": "d6162e89-dbc6-4b45-8aab-a50ea8aefc34", 00:25:30.102 "md_size": 32, 00:25:30.102 "md_interleave": true, 00:25:30.102 "dif_type": 0, 00:25:30.102 "assigned_rate_limits": { 00:25:30.102 "rw_ios_per_sec": 0, 00:25:30.102 "rw_mbytes_per_sec": 0, 00:25:30.102 "r_mbytes_per_sec": 0, 00:25:30.102 "w_mbytes_per_sec": 0 00:25:30.102 }, 00:25:30.102 "claimed": true, 00:25:30.102 "claim_type": "exclusive_write", 
00:25:30.102 "zoned": false, 00:25:30.102 "supported_io_types": { 00:25:30.102 "read": true, 00:25:30.102 "write": true, 00:25:30.102 "unmap": true, 00:25:30.102 "flush": true, 00:25:30.102 "reset": true, 00:25:30.102 "nvme_admin": false, 00:25:30.102 "nvme_io": false, 00:25:30.102 "nvme_io_md": false, 00:25:30.102 "write_zeroes": true, 00:25:30.102 "zcopy": true, 00:25:30.102 "get_zone_info": false, 00:25:30.102 "zone_management": false, 00:25:30.102 "zone_append": false, 00:25:30.102 "compare": false, 00:25:30.102 "compare_and_write": false, 00:25:30.102 "abort": true, 00:25:30.102 "seek_hole": false, 00:25:30.102 "seek_data": false, 00:25:30.102 "copy": true, 00:25:30.102 "nvme_iov_md": false 00:25:30.102 }, 00:25:30.102 "memory_domains": [ 00:25:30.102 { 00:25:30.102 "dma_device_id": "system", 00:25:30.102 "dma_device_type": 1 00:25:30.102 }, 00:25:30.102 { 00:25:30.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.102 "dma_device_type": 2 00:25:30.102 } 00:25:30.102 ], 00:25:30.102 "driver_specific": {} 00:25:30.102 } 00:25:30.102 ] 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:30.102 
09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.102 "name": "Existed_Raid", 00:25:30.102 "uuid": "c51045a8-2bd5-46b3-8363-c079c0f8163f", 00:25:30.102 "strip_size_kb": 0, 00:25:30.102 "state": "online", 00:25:30.102 "raid_level": "raid1", 00:25:30.102 "superblock": true, 00:25:30.102 "num_base_bdevs": 2, 00:25:30.102 "num_base_bdevs_discovered": 2, 00:25:30.102 
"num_base_bdevs_operational": 2, 00:25:30.102 "base_bdevs_list": [ 00:25:30.102 { 00:25:30.102 "name": "BaseBdev1", 00:25:30.102 "uuid": "deb10003-8c10-49ab-82df-0f65c13f8431", 00:25:30.102 "is_configured": true, 00:25:30.102 "data_offset": 256, 00:25:30.102 "data_size": 7936 00:25:30.102 }, 00:25:30.102 { 00:25:30.102 "name": "BaseBdev2", 00:25:30.102 "uuid": "d6162e89-dbc6-4b45-8aab-a50ea8aefc34", 00:25:30.102 "is_configured": true, 00:25:30.102 "data_offset": 256, 00:25:30.102 "data_size": 7936 00:25:30.102 } 00:25:30.102 ] 00:25:30.102 }' 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.102 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:30.675 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.675 09:18:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.676 [2024-11-06 09:18:29.490753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:30.676 "name": "Existed_Raid", 00:25:30.676 "aliases": [ 00:25:30.676 "c51045a8-2bd5-46b3-8363-c079c0f8163f" 00:25:30.676 ], 00:25:30.676 "product_name": "Raid Volume", 00:25:30.676 "block_size": 4128, 00:25:30.676 "num_blocks": 7936, 00:25:30.676 "uuid": "c51045a8-2bd5-46b3-8363-c079c0f8163f", 00:25:30.676 "md_size": 32, 00:25:30.676 "md_interleave": true, 00:25:30.676 "dif_type": 0, 00:25:30.676 "assigned_rate_limits": { 00:25:30.676 "rw_ios_per_sec": 0, 00:25:30.676 "rw_mbytes_per_sec": 0, 00:25:30.676 "r_mbytes_per_sec": 0, 00:25:30.676 "w_mbytes_per_sec": 0 00:25:30.676 }, 00:25:30.676 "claimed": false, 00:25:30.676 "zoned": false, 00:25:30.676 "supported_io_types": { 00:25:30.676 "read": true, 00:25:30.676 "write": true, 00:25:30.676 "unmap": false, 00:25:30.676 "flush": false, 00:25:30.676 "reset": true, 00:25:30.676 "nvme_admin": false, 00:25:30.676 "nvme_io": false, 00:25:30.676 "nvme_io_md": false, 00:25:30.676 "write_zeroes": true, 00:25:30.676 "zcopy": false, 00:25:30.676 "get_zone_info": false, 00:25:30.676 "zone_management": false, 00:25:30.676 "zone_append": false, 00:25:30.676 "compare": false, 00:25:30.676 "compare_and_write": false, 00:25:30.676 "abort": false, 00:25:30.676 "seek_hole": false, 00:25:30.676 "seek_data": false, 00:25:30.676 "copy": false, 00:25:30.676 "nvme_iov_md": false 00:25:30.676 }, 00:25:30.676 "memory_domains": [ 00:25:30.676 { 00:25:30.676 "dma_device_id": "system", 00:25:30.676 "dma_device_type": 1 00:25:30.676 }, 00:25:30.676 { 00:25:30.676 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:30.676 "dma_device_type": 2 00:25:30.676 }, 00:25:30.676 { 00:25:30.676 "dma_device_id": "system", 00:25:30.676 "dma_device_type": 1 00:25:30.676 }, 00:25:30.676 { 00:25:30.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.676 "dma_device_type": 2 00:25:30.676 } 00:25:30.676 ], 00:25:30.676 "driver_specific": { 00:25:30.676 "raid": { 00:25:30.676 "uuid": "c51045a8-2bd5-46b3-8363-c079c0f8163f", 00:25:30.676 "strip_size_kb": 0, 00:25:30.676 "state": "online", 00:25:30.676 "raid_level": "raid1", 00:25:30.676 "superblock": true, 00:25:30.676 "num_base_bdevs": 2, 00:25:30.676 "num_base_bdevs_discovered": 2, 00:25:30.676 "num_base_bdevs_operational": 2, 00:25:30.676 "base_bdevs_list": [ 00:25:30.676 { 00:25:30.676 "name": "BaseBdev1", 00:25:30.676 "uuid": "deb10003-8c10-49ab-82df-0f65c13f8431", 00:25:30.676 "is_configured": true, 00:25:30.676 "data_offset": 256, 00:25:30.676 "data_size": 7936 00:25:30.676 }, 00:25:30.676 { 00:25:30.676 "name": "BaseBdev2", 00:25:30.676 "uuid": "d6162e89-dbc6-4b45-8aab-a50ea8aefc34", 00:25:30.676 "is_configured": true, 00:25:30.676 "data_offset": 256, 00:25:30.676 "data_size": 7936 00:25:30.676 } 00:25:30.676 ] 00:25:30.676 } 00:25:30.676 } 00:25:30.676 }' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:30.676 BaseBdev2' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.676 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:30.935 
09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.935 [2024-11-06 09:18:29.734404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:30.935 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:30.936 09:18:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.936 "name": "Existed_Raid", 00:25:30.936 "uuid": "c51045a8-2bd5-46b3-8363-c079c0f8163f", 00:25:30.936 "strip_size_kb": 0, 00:25:30.936 "state": "online", 00:25:30.936 "raid_level": "raid1", 00:25:30.936 "superblock": true, 00:25:30.936 "num_base_bdevs": 2, 00:25:30.936 "num_base_bdevs_discovered": 1, 00:25:30.936 "num_base_bdevs_operational": 1, 00:25:30.936 "base_bdevs_list": [ 00:25:30.936 { 00:25:30.936 "name": null, 00:25:30.936 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:30.936 "is_configured": false, 00:25:30.936 "data_offset": 0, 00:25:30.936 "data_size": 7936 00:25:30.936 }, 00:25:30.936 { 00:25:30.936 "name": "BaseBdev2", 00:25:30.936 "uuid": "d6162e89-dbc6-4b45-8aab-a50ea8aefc34", 00:25:30.936 "is_configured": true, 00:25:30.936 "data_offset": 256, 00:25:30.936 "data_size": 7936 00:25:30.936 } 00:25:30.936 ] 00:25:30.936 }' 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.936 09:18:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:31.504 09:18:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.504 [2024-11-06 09:18:30.308473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:31.504 [2024-11-06 09:18:30.308589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:31.504 [2024-11-06 09:18:30.411980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:31.504 [2024-11-06 09:18:30.412650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:31.504 [2024-11-06 09:18:30.412684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:31.504 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88182 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88182 ']' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88182 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88182 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:31.505 killing process with pid 88182 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88182' 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88182 00:25:31.505 [2024-11-06 09:18:30.514432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:31.505 09:18:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88182 00:25:31.505 [2024-11-06 09:18:30.532710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:32.934 
09:18:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:25:32.934 00:25:32.934 real 0m5.200s 00:25:32.934 user 0m7.377s 00:25:32.934 sys 0m1.038s 00:25:32.934 09:18:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:32.934 ************************************ 00:25:32.934 END TEST raid_state_function_test_sb_md_interleaved 00:25:32.934 ************************************ 00:25:32.934 09:18:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:32.934 09:18:31 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:32.934 09:18:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:32.934 09:18:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:32.934 09:18:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.934 ************************************ 00:25:32.934 START TEST raid_superblock_test_md_interleaved 00:25:32.934 ************************************ 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88434 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88434 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88434 ']' 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:32.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:32.934 09:18:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:32.934 [2024-11-06 09:18:31.895768] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:25:32.935 [2024-11-06 09:18:31.895902] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88434 ] 00:25:33.193 [2024-11-06 09:18:32.068020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.193 [2024-11-06 09:18:32.195160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.452 [2024-11-06 09:18:32.413820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.452 [2024-11-06 09:18:32.413893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 malloc1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 [2024-11-06 09:18:32.824647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:34.021 [2024-11-06 09:18:32.824717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.021 [2024-11-06 09:18:32.824747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:34.021 [2024-11-06 09:18:32.824760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.021 
[2024-11-06 09:18:32.827030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.021 [2024-11-06 09:18:32.827208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:34.021 pt1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 malloc2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 [2024-11-06 09:18:32.891084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:34.021 [2024-11-06 09:18:32.891156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.021 [2024-11-06 09:18:32.891185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:34.021 [2024-11-06 09:18:32.891198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.021 [2024-11-06 09:18:32.893488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.021 [2024-11-06 09:18:32.893541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:34.021 pt2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 [2024-11-06 09:18:32.903110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:34.021 [2024-11-06 09:18:32.905634] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:34.021 [2024-11-06 09:18:32.905985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:34.021 [2024-11-06 09:18:32.906103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:34.021 [2024-11-06 09:18:32.906286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:34.021 [2024-11-06 09:18:32.906609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:34.021 [2024-11-06 09:18:32.906663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:34.021 [2024-11-06 09:18:32.907018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.021 
09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.021 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:34.021 "name": "raid_bdev1", 00:25:34.021 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:34.021 "strip_size_kb": 0, 00:25:34.021 "state": "online", 00:25:34.021 "raid_level": "raid1", 00:25:34.021 "superblock": true, 00:25:34.021 "num_base_bdevs": 2, 00:25:34.021 "num_base_bdevs_discovered": 2, 00:25:34.021 "num_base_bdevs_operational": 2, 00:25:34.021 "base_bdevs_list": [ 00:25:34.021 { 00:25:34.021 "name": "pt1", 00:25:34.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:34.021 "is_configured": true, 00:25:34.021 "data_offset": 256, 00:25:34.021 "data_size": 7936 00:25:34.021 }, 00:25:34.021 { 00:25:34.021 "name": "pt2", 00:25:34.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:34.021 "is_configured": true, 00:25:34.022 "data_offset": 256, 00:25:34.022 "data_size": 7936 00:25:34.022 } 00:25:34.022 ] 00:25:34.022 }' 00:25:34.022 09:18:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:34.022 09:18:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:34.590 [2024-11-06 09:18:33.418746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.590 "name": "raid_bdev1", 00:25:34.590 "aliases": [ 00:25:34.590 "72a0f869-3602-46c0-9d97-485476130b29" 00:25:34.590 ], 00:25:34.590 "product_name": "Raid Volume", 00:25:34.590 "block_size": 4128, 00:25:34.590 "num_blocks": 7936, 00:25:34.590 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:34.590 "md_size": 32, 
00:25:34.590 "md_interleave": true, 00:25:34.590 "dif_type": 0, 00:25:34.590 "assigned_rate_limits": { 00:25:34.590 "rw_ios_per_sec": 0, 00:25:34.590 "rw_mbytes_per_sec": 0, 00:25:34.590 "r_mbytes_per_sec": 0, 00:25:34.590 "w_mbytes_per_sec": 0 00:25:34.590 }, 00:25:34.590 "claimed": false, 00:25:34.590 "zoned": false, 00:25:34.590 "supported_io_types": { 00:25:34.590 "read": true, 00:25:34.590 "write": true, 00:25:34.590 "unmap": false, 00:25:34.590 "flush": false, 00:25:34.590 "reset": true, 00:25:34.590 "nvme_admin": false, 00:25:34.590 "nvme_io": false, 00:25:34.590 "nvme_io_md": false, 00:25:34.590 "write_zeroes": true, 00:25:34.590 "zcopy": false, 00:25:34.590 "get_zone_info": false, 00:25:34.590 "zone_management": false, 00:25:34.590 "zone_append": false, 00:25:34.590 "compare": false, 00:25:34.590 "compare_and_write": false, 00:25:34.590 "abort": false, 00:25:34.590 "seek_hole": false, 00:25:34.590 "seek_data": false, 00:25:34.590 "copy": false, 00:25:34.590 "nvme_iov_md": false 00:25:34.590 }, 00:25:34.590 "memory_domains": [ 00:25:34.590 { 00:25:34.590 "dma_device_id": "system", 00:25:34.590 "dma_device_type": 1 00:25:34.590 }, 00:25:34.590 { 00:25:34.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.590 "dma_device_type": 2 00:25:34.590 }, 00:25:34.590 { 00:25:34.590 "dma_device_id": "system", 00:25:34.590 "dma_device_type": 1 00:25:34.590 }, 00:25:34.590 { 00:25:34.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.590 "dma_device_type": 2 00:25:34.590 } 00:25:34.590 ], 00:25:34.590 "driver_specific": { 00:25:34.590 "raid": { 00:25:34.590 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:34.590 "strip_size_kb": 0, 00:25:34.590 "state": "online", 00:25:34.590 "raid_level": "raid1", 00:25:34.590 "superblock": true, 00:25:34.590 "num_base_bdevs": 2, 00:25:34.590 "num_base_bdevs_discovered": 2, 00:25:34.590 "num_base_bdevs_operational": 2, 00:25:34.590 "base_bdevs_list": [ 00:25:34.590 { 00:25:34.590 "name": "pt1", 00:25:34.590 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:34.590 "is_configured": true, 00:25:34.590 "data_offset": 256, 00:25:34.590 "data_size": 7936 00:25:34.590 }, 00:25:34.590 { 00:25:34.590 "name": "pt2", 00:25:34.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:34.590 "is_configured": true, 00:25:34.590 "data_offset": 256, 00:25:34.590 "data_size": 7936 00:25:34.590 } 00:25:34.590 ] 00:25:34.590 } 00:25:34.590 } 00:25:34.590 }' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:34.590 pt2' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.590 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:34.591 09:18:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:34.591 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:34.591 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.591 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:34.591 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.591 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:34.850 [2024-11-06 09:18:33.658673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72a0f869-3602-46c0-9d97-485476130b29 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 72a0f869-3602-46c0-9d97-485476130b29 ']' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 [2024-11-06 09:18:33.702367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:34.850 [2024-11-06 09:18:33.702400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:34.850 [2024-11-06 09:18:33.702517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:34.850 [2024-11-06 09:18:33.702580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:34.850 [2024-11-06 09:18:33.702596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 [2024-11-06 09:18:33.838392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:34.850 [2024-11-06 09:18:33.840822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:34.850 [2024-11-06 09:18:33.840915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:25:34.850 [2024-11-06 09:18:33.840985] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:34.850 [2024-11-06 09:18:33.841006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:34.850 [2024-11-06 09:18:33.841020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:34.850 request: 00:25:34.850 { 00:25:34.850 "name": "raid_bdev1", 00:25:34.850 "raid_level": "raid1", 00:25:34.850 "base_bdevs": [ 00:25:34.850 "malloc1", 00:25:34.850 "malloc2" 00:25:34.850 ], 00:25:34.850 "superblock": false, 00:25:34.850 "method": "bdev_raid_create", 00:25:34.850 "req_id": 1 00:25:34.850 } 00:25:34.850 Got JSON-RPC error response 00:25:34.850 response: 00:25:34.850 { 00:25:34.850 "code": -17, 00:25:34.850 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:34.850 } 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.850 09:18:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:34.850 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.110 [2024-11-06 09:18:33.914381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:35.110 [2024-11-06 09:18:33.914458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.110 [2024-11-06 09:18:33.914481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:35.110 [2024-11-06 09:18:33.914497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.110 [2024-11-06 09:18:33.916882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.110 [2024-11-06 09:18:33.916932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:35.110 [2024-11-06 09:18:33.917003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:35.110 [2024-11-06 09:18:33.917087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:35.110 pt1 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.110 09:18:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.110 
"name": "raid_bdev1", 00:25:35.110 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:35.110 "strip_size_kb": 0, 00:25:35.110 "state": "configuring", 00:25:35.110 "raid_level": "raid1", 00:25:35.110 "superblock": true, 00:25:35.110 "num_base_bdevs": 2, 00:25:35.110 "num_base_bdevs_discovered": 1, 00:25:35.110 "num_base_bdevs_operational": 2, 00:25:35.110 "base_bdevs_list": [ 00:25:35.110 { 00:25:35.110 "name": "pt1", 00:25:35.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.110 "is_configured": true, 00:25:35.110 "data_offset": 256, 00:25:35.110 "data_size": 7936 00:25:35.110 }, 00:25:35.110 { 00:25:35.110 "name": null, 00:25:35.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.110 "is_configured": false, 00:25:35.110 "data_offset": 256, 00:25:35.110 "data_size": 7936 00:25:35.110 } 00:25:35.110 ] 00:25:35.110 }' 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.110 09:18:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.369 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.369 [2024-11-06 09:18:34.386387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:35.369 [2024-11-06 09:18:34.386481] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.369 [2024-11-06 09:18:34.386507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:35.369 [2024-11-06 09:18:34.386522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.369 [2024-11-06 09:18:34.386715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.369 [2024-11-06 09:18:34.386733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:35.369 [2024-11-06 09:18:34.386792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:35.369 [2024-11-06 09:18:34.386822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:35.369 [2024-11-06 09:18:34.386916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:35.369 [2024-11-06 09:18:34.386932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:35.369 [2024-11-06 09:18:34.387002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:35.369 [2024-11-06 09:18:34.387076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:35.369 [2024-11-06 09:18:34.387086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:35.369 [2024-11-06 09:18:34.387155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.369 pt2 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:35.370 09:18:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.370 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.629 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.630 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.630 "name": 
"raid_bdev1", 00:25:35.630 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:35.630 "strip_size_kb": 0, 00:25:35.630 "state": "online", 00:25:35.630 "raid_level": "raid1", 00:25:35.630 "superblock": true, 00:25:35.630 "num_base_bdevs": 2, 00:25:35.630 "num_base_bdevs_discovered": 2, 00:25:35.630 "num_base_bdevs_operational": 2, 00:25:35.630 "base_bdevs_list": [ 00:25:35.630 { 00:25:35.630 "name": "pt1", 00:25:35.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.630 "is_configured": true, 00:25:35.630 "data_offset": 256, 00:25:35.630 "data_size": 7936 00:25:35.630 }, 00:25:35.630 { 00:25:35.630 "name": "pt2", 00:25:35.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.630 "is_configured": true, 00:25:35.630 "data_offset": 256, 00:25:35.630 "data_size": 7936 00:25:35.630 } 00:25:35.630 ] 00:25:35.630 }' 00:25:35.630 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.630 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 [2024-11-06 09:18:34.878636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.901 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.901 "name": "raid_bdev1", 00:25:35.901 "aliases": [ 00:25:35.901 "72a0f869-3602-46c0-9d97-485476130b29" 00:25:35.901 ], 00:25:35.901 "product_name": "Raid Volume", 00:25:35.901 "block_size": 4128, 00:25:35.901 "num_blocks": 7936, 00:25:35.901 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:35.901 "md_size": 32, 00:25:35.901 "md_interleave": true, 00:25:35.901 "dif_type": 0, 00:25:35.901 "assigned_rate_limits": { 00:25:35.901 "rw_ios_per_sec": 0, 00:25:35.901 "rw_mbytes_per_sec": 0, 00:25:35.901 "r_mbytes_per_sec": 0, 00:25:35.901 "w_mbytes_per_sec": 0 00:25:35.901 }, 00:25:35.901 "claimed": false, 00:25:35.901 "zoned": false, 00:25:35.901 "supported_io_types": { 00:25:35.901 "read": true, 00:25:35.901 "write": true, 00:25:35.901 "unmap": false, 00:25:35.901 "flush": false, 00:25:35.901 "reset": true, 00:25:35.902 "nvme_admin": false, 00:25:35.902 "nvme_io": false, 00:25:35.902 "nvme_io_md": false, 00:25:35.902 "write_zeroes": true, 00:25:35.902 "zcopy": false, 00:25:35.902 "get_zone_info": false, 00:25:35.902 "zone_management": false, 00:25:35.902 "zone_append": false, 00:25:35.902 "compare": false, 00:25:35.902 "compare_and_write": false, 00:25:35.902 "abort": false, 00:25:35.902 "seek_hole": false, 00:25:35.902 "seek_data": false, 00:25:35.902 "copy": false, 00:25:35.902 "nvme_iov_md": false 00:25:35.902 }, 
00:25:35.902 "memory_domains": [
00:25:35.902 {
00:25:35.902 "dma_device_id": "system",
00:25:35.902 "dma_device_type": 1
00:25:35.902 },
00:25:35.902 {
00:25:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:35.902 "dma_device_type": 2
00:25:35.902 },
00:25:35.902 {
00:25:35.902 "dma_device_id": "system",
00:25:35.902 "dma_device_type": 1
00:25:35.902 },
00:25:35.902 {
00:25:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:35.902 "dma_device_type": 2
00:25:35.902 }
00:25:35.902 ],
00:25:35.902 "driver_specific": {
00:25:35.902 "raid": {
00:25:35.902 "uuid": "72a0f869-3602-46c0-9d97-485476130b29",
00:25:35.902 "strip_size_kb": 0,
00:25:35.902 "state": "online",
00:25:35.902 "raid_level": "raid1",
00:25:35.902 "superblock": true,
00:25:35.902 "num_base_bdevs": 2,
00:25:35.902 "num_base_bdevs_discovered": 2,
00:25:35.902 "num_base_bdevs_operational": 2,
00:25:35.902 "base_bdevs_list": [
00:25:35.902 {
00:25:35.902 "name": "pt1",
00:25:35.902 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:35.902 "is_configured": true,
00:25:35.902 "data_offset": 256,
00:25:35.902 "data_size": 7936
00:25:35.902 },
00:25:35.902 {
00:25:35.902 "name": "pt2",
00:25:35.902 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:35.902 "is_configured": true,
00:25:35.902 "data_offset": 256,
00:25:35.902 "data_size": 7936
00:25:35.902 }
00:25:35.902 ]
00:25:35.902 }
00:25:35.902 }
00:25:35.902 }'
00:25:35.902 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:36.161 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:25:36.161 pt2'
00:25:36.161 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:36.161 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:25:36.161 09:18:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:36.161 [2024-11-06 09:18:35.110349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 72a0f869-3602-46c0-9d97-485476130b29 '!=' 72a0f869-3602-46c0-9d97-485476130b29 ']'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:36.161 [2024-11-06 09:18:35.154016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:36.161 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.421 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:36.421 "name": "raid_bdev1", 00:25:36.421 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:36.421 "strip_size_kb": 0, 00:25:36.421 "state": "online", 00:25:36.421 "raid_level": "raid1", 00:25:36.421 "superblock": true, 00:25:36.421 "num_base_bdevs": 2, 00:25:36.421 "num_base_bdevs_discovered": 1, 00:25:36.421 "num_base_bdevs_operational": 1, 00:25:36.421 "base_bdevs_list": [ 00:25:36.421 { 00:25:36.421 "name": null, 00:25:36.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.421 "is_configured": false, 00:25:36.421 "data_offset": 0, 00:25:36.421 "data_size": 7936 00:25:36.421 }, 00:25:36.421 { 00:25:36.421 "name": "pt2", 00:25:36.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.421 "is_configured": true, 00:25:36.421 "data_offset": 256, 00:25:36.421 "data_size": 7936 00:25:36.421 } 00:25:36.421 ] 00:25:36.421 }' 00:25:36.421 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.421 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.680 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.681 [2024-11-06 09:18:35.625309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.681 [2024-11-06 09:18:35.625342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.681 [2024-11-06 09:18:35.625428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.681 [2024-11-06 09:18:35.625481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.681 [2024-11-06 
09:18:35.625497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.681 [2024-11-06 09:18:35.697210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:36.681 [2024-11-06 09:18:35.697304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.681 [2024-11-06 09:18:35.697327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:36.681 [2024-11-06 09:18:35.697343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.681 [2024-11-06 09:18:35.699783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.681 [2024-11-06 09:18:35.699833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:36.681 [2024-11-06 09:18:35.699902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:36.681 [2024-11-06 09:18:35.699961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.681 [2024-11-06 09:18:35.700032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:36.681 [2024-11-06 09:18:35.700047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:25:36.681 [2024-11-06 09:18:35.700145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:36.681 [2024-11-06 09:18:35.700210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:36.681 [2024-11-06 09:18:35.700219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:36.681 [2024-11-06 09:18:35.700320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.681 pt2 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.681 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.940 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.940 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.940 "name": "raid_bdev1", 00:25:36.940 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:36.940 "strip_size_kb": 0, 00:25:36.940 "state": "online", 00:25:36.940 "raid_level": "raid1", 00:25:36.940 "superblock": true, 00:25:36.940 "num_base_bdevs": 2, 00:25:36.940 "num_base_bdevs_discovered": 1, 00:25:36.940 "num_base_bdevs_operational": 1, 00:25:36.940 "base_bdevs_list": [ 00:25:36.940 { 00:25:36.940 "name": null, 00:25:36.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.940 "is_configured": false, 00:25:36.940 "data_offset": 256, 00:25:36.940 "data_size": 7936 00:25:36.940 }, 00:25:36.940 { 00:25:36.940 "name": "pt2", 00:25:36.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.940 "is_configured": true, 00:25:36.940 "data_offset": 256, 00:25:36.940 "data_size": 7936 00:25:36.940 } 00:25:36.940 ] 00:25:36.940 }' 00:25:36.940 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.940 09:18:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 [2024-11-06 09:18:36.160545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.200 [2024-11-06 09:18:36.160745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.200 [2024-11-06 09:18:36.160929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.200 [2024-11-06 09:18:36.161088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.200 [2024-11-06 09:18:36.161109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 [2024-11-06 09:18:36.216534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:37.200 [2024-11-06 09:18:36.216623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.200 [2024-11-06 09:18:36.216653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:37.200 [2024-11-06 09:18:36.216666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.200 [2024-11-06 09:18:36.219067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.200 [2024-11-06 09:18:36.219119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:37.200 [2024-11-06 09:18:36.219192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:37.200 [2024-11-06 09:18:36.219252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:37.200 [2024-11-06 09:18:36.219384] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:37.200 [2024-11-06 09:18:36.219398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.200 [2024-11-06 09:18:36.219420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:37.200 [2024-11-06 09:18:36.219495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.200 [2024-11-06 09:18:36.219572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:37.200 [2024-11-06 09:18:36.219582] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:37.200 [2024-11-06 09:18:36.219657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:37.200 [2024-11-06 09:18:36.219723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:37.200 [2024-11-06 09:18:36.219737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:37.200 [2024-11-06 09:18:36.219810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.200 pt1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.200 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.459 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.459 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.459 "name": "raid_bdev1", 00:25:37.459 "uuid": "72a0f869-3602-46c0-9d97-485476130b29", 00:25:37.459 "strip_size_kb": 0, 00:25:37.459 "state": "online", 00:25:37.459 "raid_level": "raid1", 00:25:37.459 "superblock": true, 00:25:37.459 "num_base_bdevs": 2, 00:25:37.459 "num_base_bdevs_discovered": 1, 00:25:37.459 "num_base_bdevs_operational": 1, 00:25:37.459 "base_bdevs_list": [ 00:25:37.459 { 00:25:37.459 "name": null, 00:25:37.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.459 "is_configured": false, 00:25:37.459 "data_offset": 256, 00:25:37.459 "data_size": 7936 00:25:37.459 }, 00:25:37.459 { 00:25:37.459 "name": "pt2", 00:25:37.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.459 "is_configured": true, 00:25:37.459 "data_offset": 256, 00:25:37.459 "data_size": 7936 00:25:37.459 } 00:25:37.459 ] 00:25:37.459 }' 00:25:37.459 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.459 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.718 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.718 [2024-11-06 09:18:36.732160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 72a0f869-3602-46c0-9d97-485476130b29 '!=' 72a0f869-3602-46c0-9d97-485476130b29 ']' 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88434 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88434 ']' 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88434 00:25:37.977 09:18:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88434 00:25:37.977 killing process with pid 88434 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88434' 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 88434 00:25:37.977 [2024-11-06 09:18:36.816637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.977 [2024-11-06 09:18:36.816741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.977 09:18:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 88434 00:25:37.977 [2024-11-06 09:18:36.816793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.977 [2024-11-06 09:18:36.816812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:38.236 [2024-11-06 09:18:37.041010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.614 09:18:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:39.614 00:25:39.614 real 0m6.436s 00:25:39.614 user 0m9.744s 00:25:39.614 sys 0m1.307s 00:25:39.614 09:18:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:25:39.614 ************************************ 00:25:39.614 END TEST raid_superblock_test_md_interleaved 00:25:39.614 ************************************ 00:25:39.614 09:18:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:39.614 09:18:38 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:39.614 09:18:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:39.614 09:18:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:39.614 09:18:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.614 ************************************ 00:25:39.614 START TEST raid_rebuild_test_sb_md_interleaved 00:25:39.614 ************************************ 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88763 00:25:39.614 09:18:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:39.614 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88763 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88763 ']' 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:39.615 09:18:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:39.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:39.615 Zero copy mechanism will not be used. 00:25:39.615 [2024-11-06 09:18:38.429642] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:25:39.615 [2024-11-06 09:18:38.429806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88763 ] 00:25:39.615 [2024-11-06 09:18:38.618180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.873 [2024-11-06 09:18:38.769053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.132 [2024-11-06 09:18:38.997187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.132 [2024-11-06 09:18:38.997240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.422 BaseBdev1_malloc 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.422 09:18:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.422 [2024-11-06 09:18:39.358368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:40.422 [2024-11-06 09:18:39.358441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.422 [2024-11-06 09:18:39.358466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:40.422 [2024-11-06 09:18:39.358482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.422 [2024-11-06 09:18:39.360815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.422 [2024-11-06 09:18:39.360862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:40.422 BaseBdev1 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.422 BaseBdev2_malloc 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.422 [2024-11-06 09:18:39.416017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:40.422 [2024-11-06 09:18:39.416103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.422 [2024-11-06 09:18:39.416128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:40.422 [2024-11-06 09:18:39.416146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.422 [2024-11-06 09:18:39.418455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.422 [2024-11-06 09:18:39.418631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:40.422 BaseBdev2 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.422 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.719 spare_malloc 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.719 spare_delay 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.719 [2024-11-06 09:18:39.499300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:40.719 [2024-11-06 09:18:39.499568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.719 [2024-11-06 09:18:39.499608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:40.719 [2024-11-06 09:18:39.499626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.719 [2024-11-06 09:18:39.502253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.719 [2024-11-06 09:18:39.502309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:40.719 spare 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.719 [2024-11-06 09:18:39.511367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.719 [2024-11-06 09:18:39.513614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.719 [2024-11-06 
09:18:39.513829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:40.719 [2024-11-06 09:18:39.513846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:40.719 [2024-11-06 09:18:39.513949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:40.719 [2024-11-06 09:18:39.514029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:40.719 [2024-11-06 09:18:39.514039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:40.719 [2024-11-06 09:18:39.514120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.719 "name": "raid_bdev1", 00:25:40.719 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:40.719 "strip_size_kb": 0, 00:25:40.719 "state": "online", 00:25:40.719 "raid_level": "raid1", 00:25:40.719 "superblock": true, 00:25:40.719 "num_base_bdevs": 2, 00:25:40.719 "num_base_bdevs_discovered": 2, 00:25:40.719 "num_base_bdevs_operational": 2, 00:25:40.719 "base_bdevs_list": [ 00:25:40.719 { 00:25:40.719 "name": "BaseBdev1", 00:25:40.719 "uuid": "9d8b0454-27c0-5ded-a81c-4eb7df558a16", 00:25:40.719 "is_configured": true, 00:25:40.719 "data_offset": 256, 00:25:40.719 "data_size": 7936 00:25:40.719 }, 00:25:40.719 { 00:25:40.719 "name": "BaseBdev2", 00:25:40.719 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:40.719 "is_configured": true, 00:25:40.719 "data_offset": 256, 00:25:40.719 "data_size": 7936 00:25:40.719 } 00:25:40.719 ] 00:25:40.719 }' 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.719 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.988 09:18:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.988 [2024-11-06 09:18:39.954973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.988 09:18:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:40.988 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:41.247 09:18:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.247 [2024-11-06 09:18:40.042576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.247 09:18:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.247 "name": "raid_bdev1", 00:25:41.247 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:41.247 "strip_size_kb": 0, 00:25:41.247 "state": "online", 00:25:41.247 "raid_level": "raid1", 00:25:41.247 "superblock": true, 00:25:41.247 "num_base_bdevs": 2, 00:25:41.247 "num_base_bdevs_discovered": 1, 00:25:41.247 "num_base_bdevs_operational": 1, 00:25:41.247 "base_bdevs_list": [ 00:25:41.247 { 00:25:41.247 "name": null, 00:25:41.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.247 "is_configured": false, 00:25:41.247 "data_offset": 0, 00:25:41.247 "data_size": 7936 00:25:41.247 }, 00:25:41.247 { 00:25:41.247 "name": "BaseBdev2", 00:25:41.247 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:41.247 "is_configured": true, 00:25:41.247 "data_offset": 256, 00:25:41.247 "data_size": 7936 00:25:41.247 } 00:25:41.247 ] 00:25:41.247 }' 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.247 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.506 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:41.506 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.506 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.506 [2024-11-06 09:18:40.478425] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:41.506 [2024-11-06 09:18:40.498019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:41.506 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.506 09:18:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:41.506 [2024-11-06 09:18:40.500576] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:42.881 "name": "raid_bdev1", 00:25:42.881 
"uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:42.881 "strip_size_kb": 0, 00:25:42.881 "state": "online", 00:25:42.881 "raid_level": "raid1", 00:25:42.881 "superblock": true, 00:25:42.881 "num_base_bdevs": 2, 00:25:42.881 "num_base_bdevs_discovered": 2, 00:25:42.881 "num_base_bdevs_operational": 2, 00:25:42.881 "process": { 00:25:42.881 "type": "rebuild", 00:25:42.881 "target": "spare", 00:25:42.881 "progress": { 00:25:42.881 "blocks": 2560, 00:25:42.881 "percent": 32 00:25:42.881 } 00:25:42.881 }, 00:25:42.881 "base_bdevs_list": [ 00:25:42.881 { 00:25:42.881 "name": "spare", 00:25:42.881 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:42.881 "is_configured": true, 00:25:42.881 "data_offset": 256, 00:25:42.881 "data_size": 7936 00:25:42.881 }, 00:25:42.881 { 00:25:42.881 "name": "BaseBdev2", 00:25:42.881 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:42.881 "is_configured": true, 00:25:42.881 "data_offset": 256, 00:25:42.881 "data_size": 7936 00:25:42.881 } 00:25:42.881 ] 00:25:42.881 }' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.881 [2024-11-06 09:18:41.644082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:42.881 [2024-11-06 09:18:41.707126] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:42.881 [2024-11-06 09:18:41.707480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.881 [2024-11-06 09:18:41.707505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:42.881 [2024-11-06 09:18:41.707520] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.881 "name": "raid_bdev1", 00:25:42.881 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:42.881 "strip_size_kb": 0, 00:25:42.881 "state": "online", 00:25:42.881 "raid_level": "raid1", 00:25:42.881 "superblock": true, 00:25:42.881 "num_base_bdevs": 2, 00:25:42.881 "num_base_bdevs_discovered": 1, 00:25:42.881 "num_base_bdevs_operational": 1, 00:25:42.881 "base_bdevs_list": [ 00:25:42.881 { 00:25:42.881 "name": null, 00:25:42.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.881 "is_configured": false, 00:25:42.881 "data_offset": 0, 00:25:42.881 "data_size": 7936 00:25:42.881 }, 00:25:42.881 { 00:25:42.881 "name": "BaseBdev2", 00:25:42.881 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:42.881 "is_configured": true, 00:25:42.881 "data_offset": 256, 00:25:42.881 "data_size": 7936 00:25:42.881 } 00:25:42.881 ] 00:25:42.881 }' 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.881 09:18:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.446 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:43.447 "name": "raid_bdev1", 00:25:43.447 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:43.447 "strip_size_kb": 0, 00:25:43.447 "state": "online", 00:25:43.447 "raid_level": "raid1", 00:25:43.447 "superblock": true, 00:25:43.447 "num_base_bdevs": 2, 00:25:43.447 "num_base_bdevs_discovered": 1, 00:25:43.447 "num_base_bdevs_operational": 1, 00:25:43.447 "base_bdevs_list": [ 00:25:43.447 { 00:25:43.447 "name": null, 00:25:43.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.447 "is_configured": false, 00:25:43.447 "data_offset": 0, 00:25:43.447 "data_size": 7936 00:25:43.447 }, 00:25:43.447 { 00:25:43.447 "name": "BaseBdev2", 00:25:43.447 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:43.447 "is_configured": true, 00:25:43.447 "data_offset": 256, 00:25:43.447 "data_size": 7936 00:25:43.447 } 00:25:43.447 ] 00:25:43.447 }' 
00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.447 [2024-11-06 09:18:42.350449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:43.447 [2024-11-06 09:18:42.368543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.447 09:18:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:43.447 [2024-11-06 09:18:42.370906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.378 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.635 "name": "raid_bdev1", 00:25:44.635 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:44.635 "strip_size_kb": 0, 00:25:44.635 "state": "online", 00:25:44.635 "raid_level": "raid1", 00:25:44.635 "superblock": true, 00:25:44.635 "num_base_bdevs": 2, 00:25:44.635 "num_base_bdevs_discovered": 2, 00:25:44.635 "num_base_bdevs_operational": 2, 00:25:44.635 "process": { 00:25:44.635 "type": "rebuild", 00:25:44.635 "target": "spare", 00:25:44.635 "progress": { 00:25:44.635 "blocks": 2560, 00:25:44.635 "percent": 32 00:25:44.635 } 00:25:44.635 }, 00:25:44.635 "base_bdevs_list": [ 00:25:44.635 { 00:25:44.635 "name": "spare", 00:25:44.635 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:44.635 "is_configured": true, 00:25:44.635 "data_offset": 256, 00:25:44.635 "data_size": 7936 00:25:44.635 }, 00:25:44.635 { 00:25:44.635 "name": "BaseBdev2", 00:25:44.635 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:44.635 "is_configured": true, 00:25:44.635 "data_offset": 256, 00:25:44.635 "data_size": 7936 00:25:44.635 } 00:25:44.635 ] 00:25:44.635 }' 00:25:44.635 09:18:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:44.635 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:44.635 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=738 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:44.636 09:18:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.636 "name": "raid_bdev1", 00:25:44.636 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:44.636 "strip_size_kb": 0, 00:25:44.636 "state": "online", 00:25:44.636 "raid_level": "raid1", 00:25:44.636 "superblock": true, 00:25:44.636 "num_base_bdevs": 2, 00:25:44.636 "num_base_bdevs_discovered": 2, 00:25:44.636 "num_base_bdevs_operational": 2, 00:25:44.636 "process": { 00:25:44.636 "type": "rebuild", 00:25:44.636 "target": "spare", 00:25:44.636 "progress": { 00:25:44.636 "blocks": 2816, 00:25:44.636 "percent": 35 00:25:44.636 } 00:25:44.636 }, 00:25:44.636 "base_bdevs_list": [ 00:25:44.636 { 00:25:44.636 "name": "spare", 00:25:44.636 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:44.636 "is_configured": true, 00:25:44.636 "data_offset": 256, 00:25:44.636 "data_size": 7936 00:25:44.636 }, 00:25:44.636 { 00:25:44.636 "name": "BaseBdev2", 00:25:44.636 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:44.636 "is_configured": true, 00:25:44.636 "data_offset": 256, 00:25:44.636 "data_size": 7936 00:25:44.636 } 00:25:44.636 ] 00:25:44.636 }' 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.636 09:18:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.011 09:18:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.011 "name": "raid_bdev1", 00:25:46.011 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:46.011 "strip_size_kb": 0, 00:25:46.011 "state": "online", 00:25:46.011 "raid_level": "raid1", 00:25:46.011 "superblock": true, 00:25:46.011 "num_base_bdevs": 2, 00:25:46.011 "num_base_bdevs_discovered": 2, 00:25:46.011 "num_base_bdevs_operational": 2, 00:25:46.011 "process": { 00:25:46.011 "type": "rebuild", 00:25:46.011 "target": "spare", 00:25:46.011 "progress": { 00:25:46.011 "blocks": 5888, 00:25:46.011 "percent": 74 00:25:46.011 } 00:25:46.011 }, 00:25:46.011 "base_bdevs_list": [ 00:25:46.011 { 00:25:46.011 "name": "spare", 00:25:46.011 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:46.011 "is_configured": true, 00:25:46.011 "data_offset": 256, 00:25:46.011 "data_size": 7936 00:25:46.011 }, 00:25:46.011 { 00:25:46.011 "name": "BaseBdev2", 00:25:46.011 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:46.011 "is_configured": true, 00:25:46.011 "data_offset": 256, 00:25:46.011 "data_size": 7936 00:25:46.011 } 00:25:46.011 ] 00:25:46.011 }' 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.011 09:18:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:46.579 [2024-11-06 09:18:45.486302] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:46.579 [2024-11-06 09:18:45.486401] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:46.579 [2024-11-06 09:18:45.486531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.838 "name": "raid_bdev1", 00:25:46.838 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:46.838 "strip_size_kb": 0, 00:25:46.838 "state": "online", 00:25:46.838 "raid_level": "raid1", 00:25:46.838 "superblock": true, 00:25:46.838 "num_base_bdevs": 2, 00:25:46.838 
"num_base_bdevs_discovered": 2, 00:25:46.838 "num_base_bdevs_operational": 2, 00:25:46.838 "base_bdevs_list": [ 00:25:46.838 { 00:25:46.838 "name": "spare", 00:25:46.838 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:46.838 "is_configured": true, 00:25:46.838 "data_offset": 256, 00:25:46.838 "data_size": 7936 00:25:46.838 }, 00:25:46.838 { 00:25:46.838 "name": "BaseBdev2", 00:25:46.838 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:46.838 "is_configured": true, 00:25:46.838 "data_offset": 256, 00:25:46.838 "data_size": 7936 00:25:46.838 } 00:25:46.838 ] 00:25:46.838 }' 00:25:46.838 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.097 09:18:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.097 09:18:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:47.097 "name": "raid_bdev1", 00:25:47.097 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:47.097 "strip_size_kb": 0, 00:25:47.097 "state": "online", 00:25:47.097 "raid_level": "raid1", 00:25:47.097 "superblock": true, 00:25:47.097 "num_base_bdevs": 2, 00:25:47.097 "num_base_bdevs_discovered": 2, 00:25:47.097 "num_base_bdevs_operational": 2, 00:25:47.097 "base_bdevs_list": [ 00:25:47.097 { 00:25:47.097 "name": "spare", 00:25:47.097 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:47.097 "is_configured": true, 00:25:47.097 "data_offset": 256, 00:25:47.097 "data_size": 7936 00:25:47.097 }, 00:25:47.097 { 00:25:47.097 "name": "BaseBdev2", 00:25:47.097 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:47.097 "is_configured": true, 00:25:47.097 "data_offset": 256, 00:25:47.097 "data_size": 7936 00:25:47.097 } 00:25:47.097 ] 00:25:47.097 }' 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:47.097 09:18:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.097 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.356 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.356 "name": 
"raid_bdev1", 00:25:47.356 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:47.356 "strip_size_kb": 0, 00:25:47.356 "state": "online", 00:25:47.356 "raid_level": "raid1", 00:25:47.356 "superblock": true, 00:25:47.356 "num_base_bdevs": 2, 00:25:47.356 "num_base_bdevs_discovered": 2, 00:25:47.356 "num_base_bdevs_operational": 2, 00:25:47.356 "base_bdevs_list": [ 00:25:47.356 { 00:25:47.356 "name": "spare", 00:25:47.356 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:47.356 "is_configured": true, 00:25:47.356 "data_offset": 256, 00:25:47.356 "data_size": 7936 00:25:47.356 }, 00:25:47.356 { 00:25:47.356 "name": "BaseBdev2", 00:25:47.356 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:47.356 "is_configured": true, 00:25:47.356 "data_offset": 256, 00:25:47.356 "data_size": 7936 00:25:47.356 } 00:25:47.356 ] 00:25:47.356 }' 00:25:47.356 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.356 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.614 [2024-11-06 09:18:46.572419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.614 [2024-11-06 09:18:46.572648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.614 [2024-11-06 09:18:46.572781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.614 [2024-11-06 09:18:46.572867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.614 [2024-11-06 
09:18:46.572886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.614 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.615 09:18:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.615 [2024-11-06 09:18:46.644414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:47.615 [2024-11-06 09:18:46.644493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.615 [2024-11-06 09:18:46.644522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:47.615 [2024-11-06 09:18:46.644535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.615 [2024-11-06 09:18:46.646943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.615 [2024-11-06 09:18:46.646991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:47.615 [2024-11-06 09:18:46.647068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:47.615 [2024-11-06 09:18:46.647134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.615 [2024-11-06 09:18:46.647253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.615 spare 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.615 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.874 [2024-11-06 09:18:46.747216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:47.874 [2024-11-06 09:18:46.747278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:47.874 [2024-11-06 09:18:46.747470] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:47.874 [2024-11-06 09:18:46.747595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:47.874 [2024-11-06 09:18:46.747606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:47.874 [2024-11-06 09:18:46.747715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.874 09:18:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.874 "name": "raid_bdev1", 00:25:47.874 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:47.874 "strip_size_kb": 0, 00:25:47.874 "state": "online", 00:25:47.874 "raid_level": "raid1", 00:25:47.874 "superblock": true, 00:25:47.874 "num_base_bdevs": 2, 00:25:47.874 "num_base_bdevs_discovered": 2, 00:25:47.874 "num_base_bdevs_operational": 2, 00:25:47.874 "base_bdevs_list": [ 00:25:47.874 { 00:25:47.874 "name": "spare", 00:25:47.874 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:47.874 "is_configured": true, 00:25:47.874 "data_offset": 256, 00:25:47.874 "data_size": 7936 00:25:47.874 }, 00:25:47.874 { 00:25:47.874 "name": "BaseBdev2", 00:25:47.874 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:47.874 "is_configured": true, 00:25:47.874 "data_offset": 256, 00:25:47.874 "data_size": 7936 00:25:47.874 } 00:25:47.874 ] 00:25:47.874 }' 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.874 09:18:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.441 09:18:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.441 "name": "raid_bdev1", 00:25:48.441 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:48.441 "strip_size_kb": 0, 00:25:48.441 "state": "online", 00:25:48.441 "raid_level": "raid1", 00:25:48.441 "superblock": true, 00:25:48.441 "num_base_bdevs": 2, 00:25:48.441 "num_base_bdevs_discovered": 2, 00:25:48.441 "num_base_bdevs_operational": 2, 00:25:48.441 "base_bdevs_list": [ 00:25:48.441 { 00:25:48.441 "name": "spare", 00:25:48.441 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:48.441 "is_configured": true, 00:25:48.441 "data_offset": 256, 00:25:48.441 "data_size": 7936 00:25:48.441 }, 00:25:48.441 { 00:25:48.441 "name": "BaseBdev2", 00:25:48.441 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:48.441 "is_configured": true, 00:25:48.441 "data_offset": 256, 00:25:48.441 "data_size": 7936 00:25:48.441 } 00:25:48.441 ] 00:25:48.441 }' 00:25:48.441 09:18:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:48.441 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.442 [2024-11-06 09:18:47.399520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:48.442 09:18:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.442 "name": "raid_bdev1", 00:25:48.442 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:48.442 "strip_size_kb": 0, 00:25:48.442 "state": "online", 00:25:48.442 
"raid_level": "raid1", 00:25:48.442 "superblock": true, 00:25:48.442 "num_base_bdevs": 2, 00:25:48.442 "num_base_bdevs_discovered": 1, 00:25:48.442 "num_base_bdevs_operational": 1, 00:25:48.442 "base_bdevs_list": [ 00:25:48.442 { 00:25:48.442 "name": null, 00:25:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.442 "is_configured": false, 00:25:48.442 "data_offset": 0, 00:25:48.442 "data_size": 7936 00:25:48.442 }, 00:25:48.442 { 00:25:48.442 "name": "BaseBdev2", 00:25:48.442 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:48.442 "is_configured": true, 00:25:48.442 "data_offset": 256, 00:25:48.442 "data_size": 7936 00:25:48.442 } 00:25:48.442 ] 00:25:48.442 }' 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.442 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.009 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.009 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.009 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.009 [2024-11-06 09:18:47.859501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.009 [2024-11-06 09:18:47.859914] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:49.009 [2024-11-06 09:18:47.859945] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:49.009 [2024-11-06 09:18:47.859994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.009 [2024-11-06 09:18:47.876906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:49.009 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.009 09:18:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:49.009 [2024-11-06 09:18:47.879247] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:49.945 "name": "raid_bdev1", 00:25:49.945 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:49.945 "strip_size_kb": 0, 00:25:49.945 "state": "online", 00:25:49.945 "raid_level": "raid1", 00:25:49.945 "superblock": true, 00:25:49.945 "num_base_bdevs": 2, 00:25:49.945 "num_base_bdevs_discovered": 2, 00:25:49.945 "num_base_bdevs_operational": 2, 00:25:49.945 "process": { 00:25:49.945 "type": "rebuild", 00:25:49.945 "target": "spare", 00:25:49.945 "progress": { 00:25:49.945 "blocks": 2560, 00:25:49.945 "percent": 32 00:25:49.945 } 00:25:49.945 }, 00:25:49.945 "base_bdevs_list": [ 00:25:49.945 { 00:25:49.945 "name": "spare", 00:25:49.945 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:49.945 "is_configured": true, 00:25:49.945 "data_offset": 256, 00:25:49.945 "data_size": 7936 00:25:49.945 }, 00:25:49.945 { 00:25:49.945 "name": "BaseBdev2", 00:25:49.945 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:49.945 "is_configured": true, 00:25:49.945 "data_offset": 256, 00:25:49.945 "data_size": 7936 00:25:49.945 } 00:25:49.945 ] 00:25:49.945 }' 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.945 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.204 09:18:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:50.204 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.204 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:50.204 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.205 [2024-11-06 09:18:49.039619] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.205 [2024-11-06 09:18:49.085358] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:50.205 [2024-11-06 09:18:49.085445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.205 [2024-11-06 09:18:49.085462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.205 [2024-11-06 09:18:49.085474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.205 09:18:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.205 "name": "raid_bdev1", 00:25:50.205 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:50.205 "strip_size_kb": 0, 00:25:50.205 "state": "online", 00:25:50.205 "raid_level": "raid1", 00:25:50.205 "superblock": true, 00:25:50.205 "num_base_bdevs": 2, 00:25:50.205 "num_base_bdevs_discovered": 1, 00:25:50.205 "num_base_bdevs_operational": 1, 00:25:50.205 "base_bdevs_list": [ 00:25:50.205 { 00:25:50.205 "name": null, 00:25:50.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.205 "is_configured": false, 00:25:50.205 "data_offset": 0, 00:25:50.205 "data_size": 7936 00:25:50.205 }, 00:25:50.205 { 00:25:50.205 "name": "BaseBdev2", 00:25:50.205 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:50.205 "is_configured": true, 00:25:50.205 "data_offset": 256, 00:25:50.205 "data_size": 7936 00:25:50.205 } 00:25:50.205 ] 00:25:50.205 }' 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.205 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.772 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:50.772 09:18:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.772 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.772 [2024-11-06 09:18:49.568895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:50.772 [2024-11-06 09:18:49.568994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.772 [2024-11-06 09:18:49.569019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:50.772 [2024-11-06 09:18:49.569034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.772 [2024-11-06 09:18:49.569260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.772 [2024-11-06 09:18:49.569280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:50.772 [2024-11-06 09:18:49.569368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:50.772 [2024-11-06 09:18:49.569385] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:50.772 [2024-11-06 09:18:49.569397] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:50.772 [2024-11-06 09:18:49.569434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.772 [2024-11-06 09:18:49.586838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:50.772 spare 00:25:50.772 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.772 09:18:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:50.772 [2024-11-06 09:18:49.589279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:51.715 "name": "raid_bdev1", 00:25:51.715 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:51.715 "strip_size_kb": 0, 00:25:51.715 "state": "online", 00:25:51.715 "raid_level": "raid1", 00:25:51.715 "superblock": true, 00:25:51.715 "num_base_bdevs": 2, 00:25:51.715 "num_base_bdevs_discovered": 2, 00:25:51.715 "num_base_bdevs_operational": 2, 00:25:51.715 "process": { 00:25:51.715 "type": "rebuild", 00:25:51.715 "target": "spare", 00:25:51.715 "progress": { 00:25:51.715 "blocks": 2560, 00:25:51.715 "percent": 32 00:25:51.715 } 00:25:51.715 }, 00:25:51.715 "base_bdevs_list": [ 00:25:51.715 { 00:25:51.715 "name": "spare", 00:25:51.715 "uuid": "96b7a278-5880-5764-8752-aee7a09f94cf", 00:25:51.715 "is_configured": true, 00:25:51.715 "data_offset": 256, 00:25:51.715 "data_size": 7936 00:25:51.715 }, 00:25:51.715 { 00:25:51.715 "name": "BaseBdev2", 00:25:51.715 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:51.715 "is_configured": true, 00:25:51.715 "data_offset": 256, 00:25:51.715 "data_size": 7936 00:25:51.715 } 00:25:51.715 ] 00:25:51.715 }' 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.715 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.715 [2024-11-06 
09:18:50.732633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.974 [2024-11-06 09:18:50.795503] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.974 [2024-11-06 09:18:50.795596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.974 [2024-11-06 09:18:50.795618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.974 [2024-11-06 09:18:50.795628] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.974 09:18:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.974 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.974 "name": "raid_bdev1", 00:25:51.974 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:51.974 "strip_size_kb": 0, 00:25:51.974 "state": "online", 00:25:51.974 "raid_level": "raid1", 00:25:51.974 "superblock": true, 00:25:51.974 "num_base_bdevs": 2, 00:25:51.974 "num_base_bdevs_discovered": 1, 00:25:51.974 "num_base_bdevs_operational": 1, 00:25:51.974 "base_bdevs_list": [ 00:25:51.974 { 00:25:51.974 "name": null, 00:25:51.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.974 "is_configured": false, 00:25:51.974 "data_offset": 0, 00:25:51.974 "data_size": 7936 00:25:51.974 }, 00:25:51.974 { 00:25:51.974 "name": "BaseBdev2", 00:25:51.974 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:51.974 "is_configured": true, 00:25:51.974 "data_offset": 256, 00:25:51.974 "data_size": 7936 00:25:51.974 } 00:25:51.974 ] 00:25:51.974 }' 00:25:51.975 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.975 09:18:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.543 09:18:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:52.543 "name": "raid_bdev1", 00:25:52.543 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:52.543 "strip_size_kb": 0, 00:25:52.543 "state": "online", 00:25:52.543 "raid_level": "raid1", 00:25:52.543 "superblock": true, 00:25:52.543 "num_base_bdevs": 2, 00:25:52.543 "num_base_bdevs_discovered": 1, 00:25:52.543 "num_base_bdevs_operational": 1, 00:25:52.543 "base_bdevs_list": [ 00:25:52.543 { 00:25:52.543 "name": null, 00:25:52.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.543 "is_configured": false, 00:25:52.543 "data_offset": 0, 00:25:52.543 "data_size": 7936 00:25:52.543 }, 00:25:52.543 { 00:25:52.543 "name": "BaseBdev2", 00:25:52.543 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:52.543 "is_configured": true, 00:25:52.543 "data_offset": 256, 
00:25:52.543 "data_size": 7936 00:25:52.543 } 00:25:52.543 ] 00:25:52.543 }' 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.543 [2024-11-06 09:18:51.446806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:52.543 [2024-11-06 09:18:51.446885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.543 [2024-11-06 09:18:51.446918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:52.543 [2024-11-06 09:18:51.446930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.543 [2024-11-06 09:18:51.447127] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.543 [2024-11-06 09:18:51.447143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:52.543 [2024-11-06 09:18:51.447210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:52.543 [2024-11-06 09:18:51.447225] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:52.543 [2024-11-06 09:18:51.447238] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:52.543 [2024-11-06 09:18:51.447251] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:52.543 BaseBdev1 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.543 09:18:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:53.480 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.480 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:53.480 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.481 09:18:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.481 "name": "raid_bdev1", 00:25:53.481 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:53.481 "strip_size_kb": 0, 00:25:53.481 "state": "online", 00:25:53.481 "raid_level": "raid1", 00:25:53.481 "superblock": true, 00:25:53.481 "num_base_bdevs": 2, 00:25:53.481 "num_base_bdevs_discovered": 1, 00:25:53.481 "num_base_bdevs_operational": 1, 00:25:53.481 "base_bdevs_list": [ 00:25:53.481 { 00:25:53.481 "name": null, 00:25:53.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.481 "is_configured": false, 00:25:53.481 "data_offset": 0, 00:25:53.481 "data_size": 7936 00:25:53.481 }, 00:25:53.481 { 00:25:53.481 "name": "BaseBdev2", 00:25:53.481 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:53.481 "is_configured": true, 00:25:53.481 "data_offset": 256, 00:25:53.481 "data_size": 7936 00:25:53.481 } 00:25:53.481 ] 00:25:53.481 }' 00:25:53.481 09:18:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.481 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:54.049 "name": "raid_bdev1", 00:25:54.049 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:54.049 "strip_size_kb": 0, 00:25:54.049 "state": "online", 00:25:54.049 "raid_level": "raid1", 00:25:54.049 "superblock": true, 00:25:54.049 "num_base_bdevs": 2, 00:25:54.049 "num_base_bdevs_discovered": 1, 00:25:54.049 "num_base_bdevs_operational": 1, 00:25:54.049 "base_bdevs_list": [ 00:25:54.049 { 00:25:54.049 "name": 
null, 00:25:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.049 "is_configured": false, 00:25:54.049 "data_offset": 0, 00:25:54.049 "data_size": 7936 00:25:54.049 }, 00:25:54.049 { 00:25:54.049 "name": "BaseBdev2", 00:25:54.049 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:54.049 "is_configured": true, 00:25:54.049 "data_offset": 256, 00:25:54.049 "data_size": 7936 00:25:54.049 } 00:25:54.049 ] 00:25:54.049 }' 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:54.049 [2024-11-06 09:18:52.977621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.049 [2024-11-06 09:18:52.977938] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:54.049 [2024-11-06 09:18:52.977973] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:54.049 request: 00:25:54.049 { 00:25:54.049 "base_bdev": "BaseBdev1", 00:25:54.049 "raid_bdev": "raid_bdev1", 00:25:54.049 "method": "bdev_raid_add_base_bdev", 00:25:54.049 "req_id": 1 00:25:54.049 } 00:25:54.049 Got JSON-RPC error response 00:25:54.049 response: 00:25:54.049 { 00:25:54.049 "code": -22, 00:25:54.049 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:54.049 } 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:54.049 09:18:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.985 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.985 09:18:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.985 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.985 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.244 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.244 "name": "raid_bdev1", 00:25:55.244 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:55.244 "strip_size_kb": 0, 
00:25:55.244 "state": "online", 00:25:55.244 "raid_level": "raid1", 00:25:55.244 "superblock": true, 00:25:55.244 "num_base_bdevs": 2, 00:25:55.244 "num_base_bdevs_discovered": 1, 00:25:55.244 "num_base_bdevs_operational": 1, 00:25:55.244 "base_bdevs_list": [ 00:25:55.244 { 00:25:55.244 "name": null, 00:25:55.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.244 "is_configured": false, 00:25:55.244 "data_offset": 0, 00:25:55.244 "data_size": 7936 00:25:55.244 }, 00:25:55.244 { 00:25:55.244 "name": "BaseBdev2", 00:25:55.244 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:55.244 "is_configured": true, 00:25:55.244 "data_offset": 256, 00:25:55.244 "data_size": 7936 00:25:55.244 } 00:25:55.244 ] 00:25:55.244 }' 00:25:55.244 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.244 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.503 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.503 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.504 
09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.504 "name": "raid_bdev1", 00:25:55.504 "uuid": "cc2518db-1cca-4efc-ae15-0171ed45e8e1", 00:25:55.504 "strip_size_kb": 0, 00:25:55.504 "state": "online", 00:25:55.504 "raid_level": "raid1", 00:25:55.504 "superblock": true, 00:25:55.504 "num_base_bdevs": 2, 00:25:55.504 "num_base_bdevs_discovered": 1, 00:25:55.504 "num_base_bdevs_operational": 1, 00:25:55.504 "base_bdevs_list": [ 00:25:55.504 { 00:25:55.504 "name": null, 00:25:55.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.504 "is_configured": false, 00:25:55.504 "data_offset": 0, 00:25:55.504 "data_size": 7936 00:25:55.504 }, 00:25:55.504 { 00:25:55.504 "name": "BaseBdev2", 00:25:55.504 "uuid": "a99af4af-6d10-5e70-9541-83d29eec7926", 00:25:55.504 "is_configured": true, 00:25:55.504 "data_offset": 256, 00:25:55.504 "data_size": 7936 00:25:55.504 } 00:25:55.504 ] 00:25:55.504 }' 00:25:55.504 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.763 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88763 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88763 ']' 00:25:55.764 09:18:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88763 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88763 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:55.764 killing process with pid 88763 00:25:55.764 Received shutdown signal, test time was about 60.000000 seconds 00:25:55.764 00:25:55.764 Latency(us) 00:25:55.764 [2024-11-06T09:18:54.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.764 [2024-11-06T09:18:54.804Z] =================================================================================================================== 00:25:55.764 [2024-11-06T09:18:54.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88763' 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88763 00:25:55.764 [2024-11-06 09:18:54.653229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.764 09:18:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88763 00:25:55.764 [2024-11-06 09:18:54.653394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.764 [2024-11-06 09:18:54.653446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:25:55.764 [2024-11-06 09:18:54.653462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:56.023 [2024-11-06 09:18:54.985630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:57.434 ************************************ 00:25:57.434 END TEST raid_rebuild_test_sb_md_interleaved 00:25:57.434 ************************************ 00:25:57.434 09:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:25:57.434 00:25:57.434 real 0m17.863s 00:25:57.434 user 0m23.409s 00:25:57.434 sys 0m1.909s 00:25:57.434 09:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:57.434 09:18:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:57.434 09:18:56 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:25:57.434 09:18:56 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:25:57.434 09:18:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88763 ']' 00:25:57.434 09:18:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88763 00:25:57.434 09:18:56 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:25:57.434 ************************************ 00:25:57.434 END TEST bdev_raid 00:25:57.434 ************************************ 00:25:57.434 00:25:57.434 real 12m0.945s 00:25:57.434 user 16m7.854s 00:25:57.434 sys 2m7.183s 00:25:57.434 09:18:56 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:57.434 09:18:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.434 09:18:56 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:57.434 09:18:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:57.434 09:18:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:57.434 09:18:56 -- common/autotest_common.sh@10 -- # set +x 00:25:57.434 
************************************ 00:25:57.434 START TEST spdkcli_raid 00:25:57.435 ************************************ 00:25:57.435 09:18:56 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:57.435 * Looking for test storage... 00:25:57.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:57.435 09:18:56 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:57.435 09:18:56 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:25:57.435 09:18:56 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:57.694 09:18:56 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.694 09:18:56 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:25:57.694 09:18:56 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.694 09:18:56 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:57.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.694 --rc genhtml_branch_coverage=1 00:25:57.694 --rc genhtml_function_coverage=1 00:25:57.694 --rc genhtml_legend=1 00:25:57.694 --rc geninfo_all_blocks=1 00:25:57.694 --rc geninfo_unexecuted_blocks=1 00:25:57.694 00:25:57.694 ' 00:25:57.694 09:18:56 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:57.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.694 --rc genhtml_branch_coverage=1 00:25:57.694 --rc genhtml_function_coverage=1 00:25:57.694 --rc genhtml_legend=1 00:25:57.695 --rc geninfo_all_blocks=1 00:25:57.695 --rc geninfo_unexecuted_blocks=1 00:25:57.695 00:25:57.695 ' 00:25:57.695 
09:18:56 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:57.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.695 --rc genhtml_branch_coverage=1 00:25:57.695 --rc genhtml_function_coverage=1 00:25:57.695 --rc genhtml_legend=1 00:25:57.695 --rc geninfo_all_blocks=1 00:25:57.695 --rc geninfo_unexecuted_blocks=1 00:25:57.695 00:25:57.695 ' 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:57.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.695 --rc genhtml_branch_coverage=1 00:25:57.695 --rc genhtml_function_coverage=1 00:25:57.695 --rc genhtml_legend=1 00:25:57.695 --rc geninfo_all_blocks=1 00:25:57.695 --rc geninfo_unexecuted_blocks=1 00:25:57.695 00:25:57.695 ' 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:57.695 09:18:56 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89439 00:25:57.695 09:18:56 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89439 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 89439 ']' 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.695 09:18:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.954 [2024-11-06 09:18:56.744627] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:25:57.954 [2024-11-06 09:18:56.744934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89439 ] 00:25:57.954 [2024-11-06 09:18:56.917844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.212 [2024-11-06 09:18:57.079895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.212 [2024-11-06 09:18:57.079927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:25:59.146 09:18:58 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.146 09:18:58 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:59.146 09:18:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.146 09:18:58 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:59.146 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:59.146 ' 00:26:01.046 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:26:01.046 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:26:01.046 09:18:59 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:26:01.046 09:18:59 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.046 09:18:59 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.046 09:18:59 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:26:01.046 09:18:59 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.046 09:18:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.046 09:18:59 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:26:01.046 ' 00:26:01.982 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:26:01.982 09:19:00 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:26:01.982 09:19:00 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.982 09:19:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.240 09:19:01 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:26:02.240 09:19:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.240 09:19:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.240 09:19:01 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:26:02.240 09:19:01 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:26:02.807 09:19:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:26:02.807 09:19:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:26:02.807 09:19:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:26:02.807 09:19:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:02.807 09:19:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.807 09:19:01 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:26:02.807 09:19:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.807 09:19:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.807 09:19:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:26:02.807 ' 00:26:03.743 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:26:04.001 09:19:02 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:26:04.001 09:19:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.001 09:19:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.001 09:19:02 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:26:04.001 09:19:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.001 09:19:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.001 09:19:02 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:26:04.001 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:26:04.001 ' 00:26:05.378 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:26:05.378 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:26:05.378 09:19:04 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:26:05.378 09:19:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:05.378 09:19:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.637 09:19:04 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89439 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89439 ']' 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89439 00:26:05.637 09:19:04 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89439 00:26:05.637 killing process with pid 89439 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89439' 00:26:05.637 09:19:04 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 89439 00:26:05.638 09:19:04 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 89439 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89439 ']' 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89439 00:26:08.195 Process with pid 89439 is not found 00:26:08.195 09:19:07 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89439 ']' 00:26:08.195 09:19:07 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89439 00:26:08.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89439) - No such process 00:26:08.195 09:19:07 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 89439 is not found' 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:08.195 09:19:07 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:08.195 00:26:08.195 real 0m10.803s 00:26:08.195 user 0m22.276s 00:26:08.195 sys 
0m1.159s 00:26:08.195 09:19:07 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:08.195 09:19:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:08.195 ************************************ 00:26:08.195 END TEST spdkcli_raid 00:26:08.195 ************************************ 00:26:08.195 09:19:07 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:08.195 09:19:07 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:08.195 09:19:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:08.195 09:19:07 -- common/autotest_common.sh@10 -- # set +x 00:26:08.195 ************************************ 00:26:08.195 START TEST blockdev_raid5f 00:26:08.195 ************************************ 00:26:08.195 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:08.455 * Looking for test storage... 00:26:08.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.455 09:19:07 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:08.455 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.455 --rc genhtml_branch_coverage=1 00:26:08.455 --rc genhtml_function_coverage=1 00:26:08.455 --rc genhtml_legend=1 00:26:08.455 --rc geninfo_all_blocks=1 00:26:08.455 --rc geninfo_unexecuted_blocks=1 00:26:08.455 00:26:08.455 ' 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:08.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.455 --rc genhtml_branch_coverage=1 00:26:08.455 --rc genhtml_function_coverage=1 00:26:08.455 --rc genhtml_legend=1 00:26:08.455 --rc geninfo_all_blocks=1 00:26:08.455 --rc geninfo_unexecuted_blocks=1 00:26:08.455 00:26:08.455 ' 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:08.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.455 --rc genhtml_branch_coverage=1 00:26:08.455 --rc genhtml_function_coverage=1 00:26:08.455 --rc genhtml_legend=1 00:26:08.455 --rc geninfo_all_blocks=1 00:26:08.455 --rc geninfo_unexecuted_blocks=1 00:26:08.455 00:26:08.455 ' 00:26:08.455 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:08.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.455 --rc genhtml_branch_coverage=1 00:26:08.455 --rc genhtml_function_coverage=1 00:26:08.455 --rc genhtml_legend=1 00:26:08.455 --rc geninfo_all_blocks=1 00:26:08.455 --rc geninfo_unexecuted_blocks=1 00:26:08.455 00:26:08.455 ' 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:08.455 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89726 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
89726 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 89726 ']' 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.715 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:08.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.715 09:19:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:08.715 [2024-11-06 09:19:07.633658] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:08.715 [2024-11-06 09:19:07.634711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89726 ] 00:26:09.013 [2024-11-06 09:19:07.837942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.013 [2024-11-06 09:19:07.969408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.976 09:19:08 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:09.976 09:19:08 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:26:09.976 09:19:08 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:26:09.976 09:19:08 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:26:09.976 09:19:08 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:26:09.976 09:19:08 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.976 09:19:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:09.976 Malloc0 00:26:10.236 Malloc1 00:26:10.236 Malloc2 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:26:10.236 09:19:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "432fbe42-825a-4ecb-869b-bebde838d472"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "432fbe42-825a-4ecb-869b-bebde838d472",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "432fbe42-825a-4ecb-869b-bebde838d472",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b076476a-3f1b-4f2a-9204-9fa4eda0bc48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6d997671-84ce-4eac-8241-9ca2cf842f89",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f582b237-d231-4099-b361-ce39be64e7a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:10.236 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:26:10.495 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:26:10.495 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:26:10.495 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:26:10.495 09:19:09 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89726 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 89726 ']' 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 89726 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89726 00:26:10.495 killing process with pid 89726 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89726' 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 89726 00:26:10.495 09:19:09 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 89726 00:26:13.790 09:19:12 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:13.790 09:19:12 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:13.790 09:19:12 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:26:13.790 09:19:12 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:13.790 09:19:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:13.790 ************************************ 00:26:13.790 START TEST bdev_hello_world 00:26:13.790 ************************************ 00:26:13.790 09:19:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:13.790 [2024-11-06 09:19:12.332330] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:13.790 [2024-11-06 09:19:12.332679] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89799 ] 00:26:13.790 [2024-11-06 09:19:12.520815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.790 [2024-11-06 09:19:12.652009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.356 [2024-11-06 09:19:13.215352] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:14.356 [2024-11-06 09:19:13.215636] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:26:14.356 [2024-11-06 09:19:13.215679] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:14.356 [2024-11-06 09:19:13.216318] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:14.356 [2024-11-06 09:19:13.216518] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:14.356 [2024-11-06 09:19:13.216541] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:14.356 [2024-11-06 09:19:13.216618] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:26:14.356 00:26:14.356 [2024-11-06 09:19:13.216646] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:15.735 00:26:15.735 real 0m2.510s 00:26:15.735 user 0m2.077s 00:26:15.735 sys 0m0.308s 00:26:15.735 09:19:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:15.735 ************************************ 00:26:15.735 END TEST bdev_hello_world 00:26:15.735 ************************************ 00:26:15.735 09:19:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:15.998 09:19:14 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:26:15.998 09:19:14 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:15.998 09:19:14 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:15.998 09:19:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:15.998 ************************************ 00:26:15.998 START TEST bdev_bounds 00:26:15.998 ************************************ 00:26:15.998 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89841 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:15.999 Process bdevio pid: 89841 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89841' 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89841 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 89841 ']' 00:26:15.999 09:19:14 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.999 09:19:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:15.999 [2024-11-06 09:19:14.928805] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:15.999 [2024-11-06 09:19:14.928966] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89841 ] 00:26:16.259 [2024-11-06 09:19:15.100986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:16.259 [2024-11-06 09:19:15.265801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.259 [2024-11-06 09:19:15.265896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.259 [2024-11-06 09:19:15.265866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.197 09:19:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.197 09:19:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:26:17.197 09:19:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:17.197 I/O targets: 00:26:17.197 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:26:17.197 00:26:17.197 
00:26:17.197 CUnit - A unit testing framework for C - Version 2.1-3 00:26:17.197 http://cunit.sourceforge.net/ 00:26:17.197 00:26:17.197 00:26:17.197 Suite: bdevio tests on: raid5f 00:26:17.197 Test: blockdev write read block ...passed 00:26:17.197 Test: blockdev write zeroes read block ...passed 00:26:17.197 Test: blockdev write zeroes read no split ...passed 00:26:17.197 Test: blockdev write zeroes read split ...passed 00:26:17.455 Test: blockdev write zeroes read split partial ...passed 00:26:17.455 Test: blockdev reset ...passed 00:26:17.455 Test: blockdev write read 8 blocks ...passed 00:26:17.455 Test: blockdev write read size > 128k ...passed 00:26:17.455 Test: blockdev write read invalid size ...passed 00:26:17.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:17.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:17.455 Test: blockdev write read max offset ...passed 00:26:17.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:17.455 Test: blockdev writev readv 8 blocks ...passed 00:26:17.455 Test: blockdev writev readv 30 x 1block ...passed 00:26:17.455 Test: blockdev writev readv block ...passed 00:26:17.455 Test: blockdev writev readv size > 128k ...passed 00:26:17.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:17.455 Test: blockdev comparev and writev ...passed 00:26:17.455 Test: blockdev nvme passthru rw ...passed 00:26:17.455 Test: blockdev nvme passthru vendor specific ...passed 00:26:17.455 Test: blockdev nvme admin passthru ...passed 00:26:17.455 Test: blockdev copy ...passed 00:26:17.455 00:26:17.455 Run Summary: Type Total Ran Passed Failed Inactive 00:26:17.455 suites 1 1 n/a 0 0 00:26:17.455 tests 23 23 23 0 0 00:26:17.455 asserts 130 130 130 0 n/a 00:26:17.455 00:26:17.455 Elapsed time = 0.655 seconds 00:26:17.455 0 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89841 00:26:17.455 
09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 89841 ']' 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 89841 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89841 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:17.455 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:17.455 killing process with pid 89841 00:26:17.456 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89841' 00:26:17.456 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 89841 00:26:17.456 09:19:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 89841 00:26:18.861 09:19:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:18.861 00:26:18.861 real 0m3.059s 00:26:18.861 user 0m7.661s 00:26:18.861 sys 0m0.428s 00:26:18.861 09:19:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:18.861 09:19:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:18.861 ************************************ 00:26:18.861 END TEST bdev_bounds 00:26:18.861 ************************************ 00:26:19.119 09:19:17 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:19.119 09:19:17 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:19.119 09:19:17 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:19.119 
09:19:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:19.119 ************************************ 00:26:19.119 START TEST bdev_nbd 00:26:19.119 ************************************ 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89906 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89906 /var/tmp/spdk-nbd.sock 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 89906 ']' 00:26:19.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:19.119 09:19:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:19.119 [2024-11-06 09:19:18.073861] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:26:19.119 [2024-11-06 09:19:18.074474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.416 [2024-11-06 09:19:18.244073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.416 [2024-11-06 09:19:18.377484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:19.983 09:19:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:20.241 1+0 records in 00:26:20.241 1+0 records out 00:26:20.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461164 s, 8.9 MB/s 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:26:20.241 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:20.500 { 00:26:20.500 "nbd_device": "/dev/nbd0", 00:26:20.500 "bdev_name": "raid5f" 00:26:20.500 } 00:26:20.500 ]' 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:20.500 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:20.501 { 00:26:20.501 "nbd_device": "/dev/nbd0", 00:26:20.501 "bdev_name": "raid5f" 00:26:20.501 } 00:26:20.501 ]' 00:26:20.501 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:20.759 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:21.034 09:19:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:21.292 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:21.551 /dev/nbd0 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:21.551 09:19:20 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:21.551 1+0 records in 00:26:21.551 1+0 records out 00:26:21.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330987 s, 12.4 MB/s 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:26:21.551 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:21.552 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:21.552 09:19:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:21.552 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:21.552 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:21.810 { 00:26:21.810 "nbd_device": "/dev/nbd0", 00:26:21.810 "bdev_name": "raid5f" 00:26:21.810 } 00:26:21.810 ]' 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:21.810 { 00:26:21.810 "nbd_device": "/dev/nbd0", 00:26:21.810 "bdev_name": "raid5f" 00:26:21.810 } 00:26:21.810 ]' 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:21.810 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:22.069 256+0 records in 00:26:22.069 256+0 records out 00:26:22.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013546 s, 77.4 MB/s 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:22.069 256+0 records in 00:26:22.069 256+0 records out 00:26:22.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0390369 s, 26.9 MB/s 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:22.069 09:19:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:22.328 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:22.586 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:22.844 malloc_lvol_verify 00:26:22.844 09:19:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:23.103 306df40b-b5ac-47ed-9689-ddd77363a499 00:26:23.103 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:23.361 00157790-3407-4ba7-a5b0-f0b9b9651850 00:26:23.361 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:23.619 /dev/nbd0 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:26:23.619 mke2fs 1.47.0 (5-Feb-2023) 00:26:23.619 Discarding device blocks: 0/4096 done 00:26:23.619 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:23.619 00:26:23.619 Allocating group tables: 0/1 done 00:26:23.619 Writing inode tables: 0/1 done 00:26:23.619 Creating journal (1024 blocks): done 00:26:23.619 Writing superblocks and filesystem accounting information: 0/1 done 00:26:23.619 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.619 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89906 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 89906 ']' 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 89906 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89906 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:24.188 killing process with pid 89906 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89906' 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 89906 00:26:24.188 09:19:22 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 89906 00:26:25.574 09:19:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:25.574 00:26:25.574 real 0m6.648s 00:26:25.574 user 0m9.093s 00:26:25.574 sys 0m1.652s 00:26:25.574 09:19:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:25.574 ************************************ 00:26:25.574 END TEST bdev_nbd 00:26:25.574 ************************************ 00:26:25.574 09:19:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 09:19:24 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:26:25.834 09:19:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:26:25.834 09:19:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:26:25.834 09:19:24 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:26:25.834 09:19:24 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:25.834 09:19:24 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:25.834 09:19:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 ************************************ 00:26:25.834 START TEST bdev_fio 00:26:25.834 ************************************ 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:25.834 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:25.834 ************************************ 00:26:25.834 START TEST bdev_fio_rw_verify 00:26:25.834 ************************************ 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:25.834 09:19:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:26.093 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:26.093 fio-3.35 00:26:26.093 Starting 1 thread 00:26:38.297 00:26:38.297 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90123: Wed Nov 6 09:19:36 2024 00:26:38.297 read: IOPS=9676, BW=37.8MiB/s (39.6MB/s)(378MiB/10001msec) 00:26:38.297 slat (usec): min=19, max=220, avg=24.34, stdev= 4.27 00:26:38.297 clat (usec): min=12, max=539, avg=163.04, stdev=59.63 00:26:38.297 lat (usec): min=35, max=571, avg=187.38, stdev=60.32 00:26:38.297 clat percentiles (usec): 00:26:38.297 | 50.000th=[ 161], 99.000th=[ 293], 99.900th=[ 343], 99.990th=[ 400], 00:26:38.297 | 99.999th=[ 537] 00:26:38.297 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(393MiB/9885msec); 0 zone resets 00:26:38.297 slat (usec): min=8, max=662, avg=21.28, stdev= 5.88 00:26:38.297 clat (usec): min=72, max=1447, avg=376.89, stdev=56.34 00:26:38.297 lat (usec): min=92, max=1752, avg=398.17, stdev=57.67 00:26:38.297 clat percentiles (usec): 00:26:38.297 | 50.000th=[ 379], 99.000th=[ 519], 99.900th=[ 627], 99.990th=[ 1057], 00:26:38.297 | 99.999th=[ 1369] 00:26:38.297 bw ( KiB/s): min=36056, max=46712, per=98.56%, avg=40152.42, stdev=2502.60, samples=19 00:26:38.297 iops : min= 9014, max=11678, avg=10038.11, stdev=625.65, samples=19 00:26:38.297 lat (usec) : 20=0.01%, 50=0.01%, 100=9.31%, 
250=36.51%, 500=53.22% 00:26:38.297 lat (usec) : 750=0.94%, 1000=0.02% 00:26:38.297 lat (msec) : 2=0.01% 00:26:38.297 cpu : usr=98.75%, sys=0.38%, ctx=38, majf=0, minf=8249 00:26:38.297 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.297 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.297 issued rwts: total=96775,100674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.297 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.297 00:26:38.297 Run status group 0 (all jobs): 00:26:38.297 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=378MiB (396MB), run=10001-10001msec 00:26:38.297 WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=393MiB (412MB), run=9885-9885msec 00:26:38.865 ----------------------------------------------------- 00:26:38.865 Suppressions used: 00:26:38.865 count bytes template 00:26:38.865 1 7 /usr/src/fio/parse.c 00:26:38.865 871 83616 /usr/src/fio/iolog.c 00:26:38.865 1 8 libtcmalloc_minimal.so 00:26:38.865 1 904 libcrypto.so 00:26:38.865 ----------------------------------------------------- 00:26:38.865 00:26:38.865 00:26:38.865 real 0m12.978s 00:26:38.865 user 0m13.370s 00:26:38.865 sys 0m0.829s 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:38.865 ************************************ 00:26:38.865 END TEST bdev_fio_rw_verify 00:26:38.865 ************************************ 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- 
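The fio run above assembles its job file from echoed fragments (`serialize_overlap=1`, `[job_raid5f]`, `filename=raid5f`) plus command-line flags. Reassembled, the generated `bdev.fio` plausibly looks like the sketch below — reconstructed only from the lines the trace actually emits, not the verbatim file; `fio_config_gen` also writes a verify-workload global section whose contents the log does not show:

```ini
; Sketch reassembled from the echoed fragments in the trace above.
serialize_overlap=1

[job_raid5f]
filename=raid5f
; The remaining parameters arrive on the command line rather than in the file:
;   --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10
;   --verify_state_save=0 --spdk_json_conf=.../bdev.json
```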
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "432fbe42-825a-4ecb-869b-bebde838d472"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "432fbe42-825a-4ecb-869b-bebde838d472",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "432fbe42-825a-4ecb-869b-bebde838d472",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b076476a-3f1b-4f2a-9204-9fa4eda0bc48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6d997671-84ce-4eac-8241-9ca2cf842f89",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f582b237-d231-4099-b361-ce39be64e7a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:38.865 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:39.153 /home/vagrant/spdk_repo/spdk 00:26:39.153 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:39.153 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:39.153 09:19:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:26:39.153 00:26:39.153 real 0m13.233s 00:26:39.153 user 0m13.481s 00:26:39.153 sys 0m0.946s 00:26:39.153 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:39.153 09:19:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:39.153 ************************************ 00:26:39.153 END TEST bdev_fio 00:26:39.153 ************************************ 00:26:39.153 09:19:37 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:39.153 09:19:37 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:39.153 09:19:37 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:26:39.153 09:19:37 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:39.153 09:19:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:39.153 ************************************ 00:26:39.153 START TEST bdev_verify 00:26:39.153 ************************************ 00:26:39.153 09:19:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:39.153 [2024-11-06 09:19:38.071351] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 
00:26:39.153 [2024-11-06 09:19:38.071482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90287 ] 00:26:39.412 [2024-11-06 09:19:38.254476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.412 [2024-11-06 09:19:38.380162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.412 [2024-11-06 09:19:38.380196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.979 Running I/O for 5 seconds... 00:26:42.291 11296.00 IOPS, 44.12 MiB/s [2024-11-06T09:19:42.266Z] 11919.50 IOPS, 46.56 MiB/s [2024-11-06T09:19:43.203Z] 12428.67 IOPS, 48.55 MiB/s [2024-11-06T09:19:44.156Z] 12511.75 IOPS, 48.87 MiB/s [2024-11-06T09:19:44.156Z] 12902.20 IOPS, 50.40 MiB/s 00:26:45.116 Latency(us) 00:26:45.116 [2024-11-06T09:19:44.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.116 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:45.116 Verification LBA range: start 0x0 length 0x2000 00:26:45.116 raid5f : 5.01 6427.84 25.11 0.00 0.00 29879.17 299.39 26740.79 00:26:45.116 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:45.116 Verification LBA range: start 0x2000 length 0x2000 00:26:45.116 raid5f : 5.02 6487.14 25.34 0.00 0.00 29550.57 222.07 26740.79 00:26:45.116 [2024-11-06T09:19:44.156Z] =================================================================================================================== 00:26:45.116 [2024-11-06T09:19:44.156Z] Total : 12914.98 50.45 0.00 0.00 29714.01 222.07 26740.79 00:26:46.494 00:26:46.494 real 0m7.490s 00:26:46.495 user 0m13.801s 00:26:46.495 sys 0m0.288s 00:26:46.495 09:19:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:46.495 
************************************ 00:26:46.495 09:19:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:46.495 END TEST bdev_verify 00:26:46.495 ************************************ 00:26:46.495 09:19:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.495 09:19:45 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:26:46.495 09:19:45 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:46.495 09:19:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:46.753 ************************************ 00:26:46.753 START TEST bdev_verify_big_io 00:26:46.753 ************************************ 00:26:46.753 09:19:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.753 [2024-11-06 09:19:45.636746] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:46.753 [2024-11-06 09:19:45.636881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90380 ] 00:26:47.010 [2024-11-06 09:19:45.819883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:47.010 [2024-11-06 09:19:45.947216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.010 [2024-11-06 09:19:45.947245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.575 Running I/O for 5 seconds... 
00:26:49.889 506.00 IOPS, 31.62 MiB/s [2024-11-06T09:19:49.875Z] 634.00 IOPS, 39.62 MiB/s [2024-11-06T09:19:50.825Z] 654.33 IOPS, 40.90 MiB/s [2024-11-06T09:19:51.760Z] 681.25 IOPS, 42.58 MiB/s [2024-11-06T09:19:52.018Z] 697.40 IOPS, 43.59 MiB/s 00:26:52.978 Latency(us) 00:26:52.978 [2024-11-06T09:19:52.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.978 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:52.978 Verification LBA range: start 0x0 length 0x200 00:26:52.978 raid5f : 5.34 356.65 22.29 0.00 0.00 8848129.87 212.20 409323.64 00:26:52.978 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:52.978 Verification LBA range: start 0x200 length 0x200 00:26:52.978 raid5f : 5.33 357.32 22.33 0.00 0.00 8773867.45 294.45 409323.64 00:26:52.978 [2024-11-06T09:19:52.018Z] =================================================================================================================== 00:26:52.978 [2024-11-06T09:19:52.018Z] Total : 713.98 44.62 0.00 0.00 8810998.66 212.20 409323.64 00:26:54.354 00:26:54.354 real 0m7.843s 00:26:54.354 user 0m14.481s 00:26:54.354 sys 0m0.307s 00:26:54.354 09:19:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:54.354 09:19:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:54.354 ************************************ 00:26:54.354 END TEST bdev_verify_big_io 00:26:54.354 ************************************ 00:26:54.614 09:19:53 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:54.614 09:19:53 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:54.614 09:19:53 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:54.614 09:19:53 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:54.614 ************************************ 00:26:54.614 START TEST bdev_write_zeroes 00:26:54.614 ************************************ 00:26:54.614 09:19:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:54.614 [2024-11-06 09:19:53.551006] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:54.614 [2024-11-06 09:19:53.551139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90484 ] 00:26:54.872 [2024-11-06 09:19:53.732509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.872 [2024-11-06 09:19:53.854980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.467 Running I/O for 1 seconds... 
00:26:56.403 23631.00 IOPS, 92.31 MiB/s 00:26:56.403 Latency(us) 00:26:56.403 [2024-11-06T09:19:55.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.403 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:56.403 raid5f : 1.01 23607.60 92.22 0.00 0.00 5403.33 1592.34 7527.43 00:26:56.403 [2024-11-06T09:19:55.443Z] =================================================================================================================== 00:26:56.403 [2024-11-06T09:19:55.443Z] Total : 23607.60 92.22 0.00 0.00 5403.33 1592.34 7527.43 00:26:58.306 00:26:58.306 real 0m3.457s 00:26:58.306 user 0m3.044s 00:26:58.306 sys 0m0.284s 00:26:58.306 09:19:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:58.306 09:19:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:58.306 ************************************ 00:26:58.306 END TEST bdev_write_zeroes 00:26:58.306 ************************************ 00:26:58.306 09:19:56 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.306 09:19:56 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:58.306 09:19:56 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:58.306 09:19:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:58.306 ************************************ 00:26:58.306 START TEST bdev_json_nonenclosed 00:26:58.306 ************************************ 00:26:58.306 09:19:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.306 [2024-11-06 
09:19:57.075302] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:58.306 [2024-11-06 09:19:57.075446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90537 ] 00:26:58.306 [2024-11-06 09:19:57.258569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.565 [2024-11-06 09:19:57.381957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.565 [2024-11-06 09:19:57.382063] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:58.565 [2024-11-06 09:19:57.382095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:58.565 [2024-11-06 09:19:57.382107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:58.824 00:26:58.824 real 0m0.670s 00:26:58.824 user 0m0.421s 00:26:58.824 sys 0m0.144s 00:26:58.824 09:19:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:58.824 09:19:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:58.824 ************************************ 00:26:58.824 END TEST bdev_json_nonenclosed 00:26:58.824 ************************************ 00:26:58.824 09:19:57 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.824 09:19:57 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:26:58.824 09:19:57 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:58.824 09:19:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:58.824 
************************************ 00:26:58.824 START TEST bdev_json_nonarray 00:26:58.824 ************************************ 00:26:58.824 09:19:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.824 [2024-11-06 09:19:57.824232] Starting SPDK v25.01-pre git sha1 cc533a3e5 / DPDK 24.03.0 initialization... 00:26:58.824 [2024-11-06 09:19:57.824379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90563 ] 00:26:59.083 [2024-11-06 09:19:58.009840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.342 [2024-11-06 09:19:58.131656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.342 [2024-11-06 09:19:58.131766] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:59.342 [2024-11-06 09:19:58.131790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:26:59.342 [2024-11-06 09:19:58.131812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:59.602
00:26:59.602 real	0m0.684s
00:26:59.602 user	0m0.428s
00:26:59.602 sys	0m0.151s
00:26:59.602 09:19:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:59.602 09:19:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:26:59.602 ************************************
00:26:59.602 END TEST bdev_json_nonarray
00:26:59.602 ************************************
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:26:59.602 09:19:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:26:59.602
00:26:59.602 real	0m51.255s
00:26:59.602 user	1m9.546s
00:26:59.602 sys	0m5.685s
00:26:59.602 09:19:58 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:59.602 09:19:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:26:59.602 ************************************
00:26:59.602 END TEST blockdev_raid5f
00:26:59.602 ************************************
00:26:59.602 09:19:58 -- spdk/autotest.sh@194 -- # uname -s
00:26:59.602 09:19:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@256 -- # timing_exit lib
00:26:59.602 09:19:58 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:59.602 09:19:58 -- common/autotest_common.sh@10 -- # set +x
00:26:59.602 09:19:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:26:59.602 09:19:58 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:26:59.602 09:19:58 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:26:59.602 09:19:58 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:26:59.602 09:19:58 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:59.602 09:19:58 -- common/autotest_common.sh@10 -- # set +x
00:26:59.602 09:19:58 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:26:59.602 09:19:58 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:26:59.602 09:19:58 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:26:59.602 09:19:58 -- common/autotest_common.sh@10 -- # set +x
00:27:02.144 INFO: APP EXITING
00:27:02.144 INFO: killing all VMs
00:27:02.144 INFO: killing vhost app
00:27:02.144 INFO: EXIT DONE
00:27:02.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:02.402 Waiting for block devices as requested
00:27:02.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:27:02.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:27:03.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:03.606 Cleaning
00:27:03.606 Removing: /var/run/dpdk/spdk0/config
00:27:03.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:03.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:03.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:03.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:03.606 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:03.606 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:03.606 Removing: /dev/shm/spdk_tgt_trace.pid56679
00:27:03.606 Removing: /var/run/dpdk/spdk0
00:27:03.606 Removing: /var/run/dpdk/spdk_pid56439
00:27:03.606 Removing: /var/run/dpdk/spdk_pid56679
00:27:03.606 Removing: /var/run/dpdk/spdk_pid56914
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57018
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57074
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57202
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57231
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57441
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57547
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57655
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57782
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57895
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57935
00:27:03.606 Removing: /var/run/dpdk/spdk_pid57971
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58047
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58159
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58617
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58692
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58766
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58782
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58939
00:27:03.606 Removing: /var/run/dpdk/spdk_pid58960
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59116
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59132
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59207
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59225
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59295
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59318
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59513
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59550
00:27:03.606 Removing: /var/run/dpdk/spdk_pid59639
00:27:03.606 Removing: /var/run/dpdk/spdk_pid60995
00:27:03.606 Removing: /var/run/dpdk/spdk_pid61201
00:27:03.606 Removing: /var/run/dpdk/spdk_pid61341
00:27:03.606 Removing: /var/run/dpdk/spdk_pid61990
00:27:03.606 Removing: /var/run/dpdk/spdk_pid62196
00:27:03.606 Removing: /var/run/dpdk/spdk_pid62347
00:27:03.865 Removing: /var/run/dpdk/spdk_pid62985
00:27:03.865 Removing: /var/run/dpdk/spdk_pid63315
00:27:03.865 Removing: /var/run/dpdk/spdk_pid63461
00:27:03.865 Removing: /var/run/dpdk/spdk_pid64846
00:27:03.865 Removing: /var/run/dpdk/spdk_pid65099
00:27:03.865 Removing: /var/run/dpdk/spdk_pid65245
00:27:03.865 Removing: /var/run/dpdk/spdk_pid66642
00:27:03.865 Removing: /var/run/dpdk/spdk_pid66895
00:27:03.865 Removing: /var/run/dpdk/spdk_pid67035
00:27:03.865 Removing: /var/run/dpdk/spdk_pid68421
00:27:03.865 Removing: /var/run/dpdk/spdk_pid68856
00:27:03.865 Removing: /var/run/dpdk/spdk_pid69007
00:27:03.865 Removing: /var/run/dpdk/spdk_pid70483
00:27:03.865 Removing: /var/run/dpdk/spdk_pid70749
00:27:03.865 Removing: /var/run/dpdk/spdk_pid70889
00:27:03.865 Removing: /var/run/dpdk/spdk_pid72375
00:27:03.865 Removing: /var/run/dpdk/spdk_pid72634
00:27:03.865 Removing: /var/run/dpdk/spdk_pid72784
00:27:03.865 Removing: /var/run/dpdk/spdk_pid74260
00:27:03.865 Removing: /var/run/dpdk/spdk_pid74753
00:27:03.865 Removing: /var/run/dpdk/spdk_pid74893
00:27:03.865 Removing: /var/run/dpdk/spdk_pid75041
00:27:03.865 Removing: /var/run/dpdk/spdk_pid75466
00:27:03.865 Removing: /var/run/dpdk/spdk_pid76204
00:27:03.865 Removing: /var/run/dpdk/spdk_pid76583
00:27:03.865 Removing: /var/run/dpdk/spdk_pid77269
00:27:03.865 Removing: /var/run/dpdk/spdk_pid77721
00:27:03.865 Removing: /var/run/dpdk/spdk_pid78480
00:27:03.865 Removing: /var/run/dpdk/spdk_pid78889
00:27:03.865 Removing: /var/run/dpdk/spdk_pid80837
00:27:03.865 Removing: /var/run/dpdk/spdk_pid81281
00:27:03.865 Removing: /var/run/dpdk/spdk_pid81719
00:27:03.865 Removing: /var/run/dpdk/spdk_pid83822
00:27:03.865 Removing: /var/run/dpdk/spdk_pid84308
00:27:03.865 Removing: /var/run/dpdk/spdk_pid84835
00:27:03.865 Removing: /var/run/dpdk/spdk_pid85899
00:27:03.865 Removing: /var/run/dpdk/spdk_pid86222
00:27:03.865 Removing: /var/run/dpdk/spdk_pid87172
00:27:03.865 Removing: /var/run/dpdk/spdk_pid87493
00:27:03.865 Removing: /var/run/dpdk/spdk_pid88434
00:27:03.865 Removing: /var/run/dpdk/spdk_pid88763
00:27:03.865 Removing: /var/run/dpdk/spdk_pid89439
00:27:03.865 Removing: /var/run/dpdk/spdk_pid89726
00:27:03.865 Removing: /var/run/dpdk/spdk_pid89799
00:27:03.865 Removing: /var/run/dpdk/spdk_pid89841
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90108
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90287
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90380
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90484
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90537
00:27:03.865 Removing: /var/run/dpdk/spdk_pid90563
00:27:03.865 Clean
00:27:04.124 09:20:02 -- common/autotest_common.sh@1451 -- # return 0
00:27:04.124 09:20:02 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:27:04.124 09:20:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:04.124 09:20:02 -- common/autotest_common.sh@10 -- # set +x
00:27:04.124 09:20:03 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:27:04.124 09:20:03 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:04.124 09:20:03 -- common/autotest_common.sh@10 -- # set +x
00:27:04.124 09:20:03 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:04.124 09:20:03 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:04.124 09:20:03 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:04.124 09:20:03 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:27:04.124 09:20:03 -- spdk/autotest.sh@394 -- # hostname
00:27:04.124 09:20:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:04.383 geninfo: WARNING: invalid characters removed from testname!
00:27:30.947 09:20:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:32.890 09:20:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:35.423 09:20:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:37.362 09:20:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:39.892 09:20:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:41.816 09:20:40 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:44.348 09:20:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:44.348 09:20:43 -- spdk/autorun.sh@1 -- $ timing_finish
00:27:44.348 09:20:43 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:27:44.348 09:20:43 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:44.348 09:20:43 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:27:44.348 09:20:43 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:44.348 + [[ -n 5207 ]]
00:27:44.348 + sudo kill 5207
00:27:44.357 [Pipeline] }
00:27:44.373 [Pipeline] // timeout
00:27:44.379 [Pipeline] }
00:27:44.394 [Pipeline] // stage
00:27:44.400 [Pipeline] }
00:27:44.415 [Pipeline] // catchError
00:27:44.424 [Pipeline] stage
00:27:44.426 [Pipeline] { (Stop VM)
00:27:44.440 [Pipeline] sh
00:27:44.724 + vagrant halt
00:27:48.028 ==> default: Halting domain...
00:27:54.629 [Pipeline] sh
00:27:54.911 + vagrant destroy -f
00:27:58.201 ==> default: Removing domain...
00:27:58.215 [Pipeline] sh
00:27:58.547 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:27:58.556 [Pipeline] }
00:27:58.571 [Pipeline] // stage
00:27:58.577 [Pipeline] }
00:27:58.592 [Pipeline] // dir
00:27:58.597 [Pipeline] }
00:27:58.611 [Pipeline] // wrap
00:27:58.617 [Pipeline] }
00:27:58.630 [Pipeline] // catchError
00:27:58.640 [Pipeline] stage
00:27:58.642 [Pipeline] { (Epilogue)
00:27:58.656 [Pipeline] sh
00:27:58.939 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:04.221 [Pipeline] catchError
00:28:04.223 [Pipeline] {
00:28:04.236 [Pipeline] sh
00:28:04.519 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:04.520 Artifacts sizes are good
00:28:04.529 [Pipeline] }
00:28:04.543 [Pipeline] // catchError
00:28:04.555 [Pipeline] archiveArtifacts
00:28:04.563 Archiving artifacts
00:28:04.682 [Pipeline] cleanWs
00:28:04.695 [WS-CLEANUP] Deleting project workspace...
00:28:04.695 [WS-CLEANUP] Deferred wipeout is used...
00:28:04.701 [WS-CLEANUP] done
00:28:04.703 [Pipeline] }
00:28:04.719 [Pipeline] // stage
00:28:04.724 [Pipeline] }
00:28:04.738 [Pipeline] // node
00:28:04.744 [Pipeline] End of Pipeline
00:28:04.780 Finished: SUCCESS